So, welcome, and good morning or good afternoon depending on where you are. Welcome to this OpenAIRE Open Access Week. During this week we have been presenting some of our services in a morning webinar series, and in the afternoons we have had panel sessions and knowledge cafés where we debate open science related issues. Some housekeeping rules before we start: this event will be recorded, and we ask you all to keep your microphones off during the presentations. If you want to participate, you can introduce yourself and interact with the other participants using the chat, or address questions to the speakers there. At the end of the presentations you can also raise your hand, open your microphone, and put your question, doubt, or comment to the speakers directly. We will share this recording and the presentations you will see with you. Please use our hashtags, OpenAIRE OA week and OpenAIRE services, or tag OpenAIRE on social media, on LinkedIn or Facebook, and share your ideas and thoughts about the sessions with us. At the end, my colleague will share a link in the chat to a short evaluation form, because we want to have your opinion about the session and your feedback.

Today we are having a session dedicated to one of our services, Explore, the discovery service of OpenAIRE. This session will be dedicated to discovering research, with a focus on the SDGs, the Sustainable Development Goals, and the Fields of Science classifications. Among our speakers will be Konstantina Galouni; I hope I spelled that right, and I am sorry if I did not. So now I will give the floor to them, and I hope you enjoy the session. Thank you. Konstantina and Haris, the floor is yours.

Okay, thank you. So let me share my screen again. Can you see my screen? Yes, yes. Okay.
We welcome you all to this session. We are very excited to have you for this presentation and discussion about two novel services hosted by OpenAIRE, provided through OpenAIRE Explore, and powered by SciNoBo. SciNoBo is actually the research and development team that I am leading at Athena Research Center, working on the science of science. Okay, let's briefly see the agenda. We start with the Field of Science and discipline classification system that we have been developing, which is already in place in OpenAIRE Explore, and look at the rationale behind this classification system and the state of play. For example, we know there is a need for a publication-based classification system; what we mostly see in the market are venue-based classification systems, where a journal classifies all of its papers into some fields of science rather than doing it paper by paper. We go to a more fine-grained level. There is a need to go deeper and take into consideration not only the structural metadata of a publication but also its content. So we need a hierarchical classification system that provides these field labels and categorizes a publication into more than one field, according to its interdisciplinary or multidisciplinary content. This is the first service, and we will see all the details. Then we move to the Sustainable Development Goals and the classification system we have been developing there. The idea is to map scientific developments and outputs to the SDGs by assigning one or more SDG classes to those scientific assets. Finally, we come to OpenAIRE Explore, which is a discovery portal for open science scholarly works on top of the OpenAIRE Research Graph, and we take a glimpse of OpenAIRE Explore and its main functionality for end users.
We will present both services, both the Field of Science and the SDG classification, mostly in technical terms; the focus of this presentation is on the technology behind both services. The services, as I said, are powered by SciNoBo, "science, no borders", which is what the name means. Both services were developed in the context of the IntelComp project, a Horizon Europe project whose main goal is to deliver a platform assisting and facilitating the whole spectrum of evidence-based, AI-driven STI policy. It sits in the area of policy intelligence: helping policy makers, decision makers, and funders by providing them with insights and KPIs, all calculated by a suite of analytical tools for STI analysis. This is one of the objectives of the project, and the services and tools that we will be talking about today were developed under this objective.

So, let's start with the Field of Science classification. My colleague Sotiris Kotitsas will give us a technical presentation about the system. Sotiris? Yes, can I share my screen? Yes, of course. Can you see my screen? Yes, yes.

Okay, hello everyone. My name is Sotiris Kotitsas and I am a research associate at Athena Research Center. In the following presentation I will present the Field of Science classifier that the SciNoBo team has developed. This work was also published at SciK 2022, which was co-located with The Web Conference. Okay. In this work we study field of science classification methods, and as you can see in the example, our end result provides labels at various levels of granularity, such as natural sciences at level 1 and optics at level 3.
Field of science classification is of crucial importance if we consider the increasing number of scientific publications and how FoS classification systems can power a wealth of applications for the scientific literature, including search engines, recommendation systems, and science monitoring. Furthermore, it allows funders, publishers, scholars, companies, and other stakeholders to organize the scientific literature more effectively, calculate impact indicators, and identify emerging fields of science, which can also facilitate science, technology, and innovation policymaking in various sectors like climate change. Regarding the state of play, most existing methods perform field of science classification by relying on the publishing journal of the paper and its FoS classification, or on textual content such as titles, abstracts, and keywords. In addition, some of them focus on a specific domain like computer science. However, more and more journals tend to be multidisciplinary. Furthermore, the textual metadata are not always available, and some methods have trouble discriminating between field of science labels that share similar vocabularies, like materials and metallurgy. Our FoS classifier, on the other hand, focuses on leveraging the structural properties of a publication through its citations and references, organizing them in a multilayer network. Finally, we also create a hierarchical field of science taxonomy across all domains that extends the OECD disciplines with the Science-Metrix classification. Before I go into detail on our classification approach, I will briefly describe the field of science taxonomy. It is based on the OECD disciplines and the Science-Metrix classification labels: we manually link the Science-Metrix labels to the level-2 OECD labels, and the resulting field of science taxonomy is used as the classification scheme in our FoS classifier.
On this slide we also present some examples from the field of science taxonomy, along with statistics for the three levels regarding the number of FoS labels per level. The extension of our field of science taxonomy with three more levels, namely levels 4, 5, and 6, is currently work in progress; we utilize community detection and neural topic modeling to generate these extra three levels and also try to discover emerging and evolving fields of science. This slide is an overview of the classifier, and the intuition behind it is that a publication mostly cites thematically related publications. To create the classifier, we employ a multilayer graph approach: we bridge venues and publications by constructing a multilayer network. The nodes in the graph can be venues (meaning journals and conferences), the field of science labels of the venues, and publications. The edges reflect the layers of the multilayer graph and can be: publication-to-publication edges, reflecting citing and cited relationships between publications; publication-to-venue edges, meaning that a publication was published in that venue; venue-to-venue edges, reflecting citing and cited relationships between their respective publications; venue-to-FoS edges, which are provided by the Science-Metrix journal classification; hierarchical edges between field of science labels; and publication-to-FoS edges, which are the end result. The classification step consists of classifying a publication based on its out-citations and in-citations, where out-citations refer to the publishing venues of the publications it references, and in-citations to the publishing venues of the publications it gets cited by.
Furthermore, we can quickly note some pros and cons. On the pro side, the classifier works with minimal metadata, and we can classify a publication from its very first day by utilizing its references. Some disadvantages are that the FoS graph needs constant updates, because publications receive more and more citations over time, and because we seed the venue-to-FoS labels through Science-Metrix, very few venues have labels at the initial graph creation. The previous slide basically described the graph representation step; here we give the complete pipeline of the classifier. In the next slides I will describe the red boxes presented in this slide, which are the graph creation, the label propagation, which is an iterative process, and the inference step, where, given a publication and the required metadata, the classifier can output field of science labels. To create our graph, we exploited the OpenAIRE Research Graph and retrieved all the publications in the years 2016 to 2021, along with their references and citations when available. Furthermore, for every publication we try to retrieve the publishing venue and the publishing venues of its references and citations. We also perform venue deduplication, trying to map the venue name variants to their abbreviation. For example, all the instances of "Association for Computational Linguistics" that you can see on this slide should be mapped to ACL. As a result, we can create venue-to-venue edges, creating the initial graph; the weight of an edge is the number of times a venue cites, or is cited by, another venue. Finally, to create the venue-to-FoS edges, we utilize the Science-Metrix journal classification by linking the FoS journal labels to the venue nodes in the graph. Initially only a small portion of the venues have field of science labels, as I mentioned, which we hope to alleviate using label propagation.
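The deduplication and edge-building step described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the alias table and the citation pairs are made up, and real venue deduplication is considerably more involved.

```python
from collections import Counter

# Hypothetical variant-to-abbreviation map (illustrative only).
VENUE_ALIASES = {
    "association for computational linguistics": "ACL",
    "annual meeting of the association for computational linguistics": "ACL",
    "conference on empirical methods in natural language processing": "EMNLP",
}

def canonical(venue: str) -> str:
    """Map a raw venue string to its canonical abbreviation when known."""
    return VENUE_ALIASES.get(venue.strip().lower(), venue)

def venue_edges(citation_pairs):
    """Build weighted venue->venue edges from (citing_venue, cited_venue) pairs.
    The edge weight is the number of times one venue cites another."""
    return Counter((canonical(a), canonical(b)) for a, b in citation_pairs)

edges = venue_edges([
    ("Association for Computational Linguistics",
     "Conference on Empirical Methods in Natural Language Processing"),
    ("ACL", "EMNLP"),
])
# both raw pairs collapse onto the same canonical ACL -> EMNLP edge
```

Because both citation pairs map onto the same canonical names, they accumulate into a single edge with weight 2, which is exactly why deduplication matters before the graph is built.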
Okay, regarding label propagation, the intuition behind it is that a venue is likely to express the field of science labels of its most referenced venues, like a nearest-neighbour classification setting. We utilize the venue-to-venue edges and the neighbourhood context of the graph to enrich the initial labeling from Science-Metrix. We dynamically evaluate the initial labeling, and after a few rounds some single-labeled venues might become multi-labeled. I will demonstrate label propagation with a simple example here. The graph presented shows four venues, the orange nodes, which are connected to each other through citing and cited relationships with the red weights. ACL and RANLP are also connected to field of science labels from Science-Metrix with green weights, which represent the confidence of a venue having these labels. Now, to propagate label information from ACL and RANLP to EMNLP, we basically multiply the weights along the path, i.e. the red weights with the green weights. As a result, we can assign field of science labels to EMNLP with a certain confidence. Finally, I will present an FoS classification example. On the image you can see the publication we want to classify, along with its title and abstract. Given the DOI, we retrieve the metadata of this publication in the first step. The metadata we need for classification are the publishing venue, the citing venues, which are the venues of the publications that cite it, and the referenced venues. In the next step we preprocess the venue names, deduplicating them and aggregating their occurrences. In the last step we input the required metadata to the FoS classifier, and by utilizing the same mechanism as in label propagation, we can now infer the field of science labels of the publication.
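The "multiply red weights by green weights" step can be sketched in a few lines. The venue names, citation weights, and label confidences below are made-up illustrative values, not figures from the slide.

```python
from collections import defaultdict

# "Red" edges: citation weights from a venue to its referenced venues,
# and "green" edges: venue -> FoS label confidences seeded from the
# journal classification. All values here are illustrative.
cites = {"EMNLP": {"ACL": 0.7, "RANLP": 0.3}}
venue_fos = {
    "ACL":   {"artificial intelligence & image processing": 0.9},
    "RANLP": {"languages & linguistics": 0.8},
}

def propagate(venue, cites, venue_fos):
    """Score FoS labels for `venue` by multiplying each citation weight
    (red) with the FoS confidence of the cited venue (green), summing
    over all paths that reach the same label."""
    scores = defaultdict(float)
    for cited, weight in cites.get(venue, {}).items():
        for label, conf in venue_fos.get(cited, {}).items():
            scores[label] += weight * conf
    return dict(scores)

scores = propagate("EMNLP", cites, venue_fos)
# AI & image processing: 0.7 * 0.9 = 0.63; languages & linguistics: 0.3 * 0.8 = 0.24
```

The same scoring mechanism, applied at inference time to a publication's publishing, referenced, and citing venues, yields the publication-level FoS labels with their confidences.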
We have the choice of propagating information from the venue level to the publication level only through the publishing venue, which simulates a journal-level classification approach; through the referenced venues of the publication; through the referenced and citing venues; or by utilizing all of the above, the publishing venue, the referenced venues, and the citing venues together. Okay, thank you. This was the presentation regarding the field of science classification. Now I will stop sharing and pass the slides to my colleague Dimitris Pappas, who will present the SDG classification system.

Thank you, Sotiris, for this presentation. The critical point in FoS, in the field of science classification, is that it is a hierarchical classification system that works at the publication level, not at the venue level; this is its main advantage. Second, it respects well-established taxonomies in science by mapping to those taxonomies. Third, it is incremental, in the sense that it takes the data into consideration as they become available for a specific publication: the metadata, like the venue and the publisher, at the beginning; later on the citations and the references of the publication; and last but not least the content, the actual title and abstract of the publication, in order to classify it as accurately as possible. These are the main features of this classification system, and we will see later how it is visualized and can be used by end users in OpenAIRE Explore. Let's move to the SDG classification system now. Dimitris, you have the floor.

Let me share my screen and start the presentation; I will share my entire screen, that would be better. Okay, so regarding the SDG classification system, I will give you an overview of the entire system that we have developed for the SDG classification of scientific publications.
So, why do we want to classify documents into SDG categories? The SDGs are a set of 17 goals aiming to transform the world over the next 15 years, as decided by the United Nations. The goals are designed to eliminate poverty, discrimination, preventable deaths, and so on. Policymakers need to classify research according to these SDGs in order to monitor and evaluate its societal impact, or for policymaking in general. What we have done so far is develop an SDG classifier and classify approximately 8 million publications that have a DOI in their metadata; these data can be found in the OpenAIRE Graph. We have also built a silver corpus for SDG classification, because no gold SDG classification corpus actually exists for this purpose. Our classifier uses the title and the abstract of a publication in order to classify it into one or more SDG categories. As I said, there are 17 SDGs, and each goal has from five to nineteen targets. The idea is that, using the OpenAIRE Graph and the thesaurus we collected from UNBIS, we can build a classifier that makes first predictions for the publications, which can then be curated by humans; these curations and annotations can then come back as feedback to the machine, which can tune itself and give better results. So we started with UNBIS and a set of predefined key phrases and key phrase combinations for each SDG category. For example, for SDG 1 we have phrases like "homeless" and "income", and for SDG 2 we can see "crop diversity", and so on. It is not always just a single phrase but also combinations of phrases; I did not add those here because they would not fit on one slide, but you can see the link at the bottom of the slide. So, we use these key phrases and combinations.
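The basic key-phrase matching idea can be sketched as follows. The phrases and goal assignments below are a toy vocabulary for illustration; the real vocabulary has many phrases and phrase combinations per SDG, derived from the UNBIS thesaurus.

```python
# Toy seed vocabulary (illustrative only).
SDG_PHRASES = {
    "SDG 1": ["extreme poverty", "homeless"],
    "SDG 3": ["malaria", "tuberculosis"],
    "SDG 7": ["renewable energy", "energy consumption"],
}

def match_sdgs(title: str, abstract: str) -> set:
    """Return the SDG categories whose key phrases appear in the
    title or abstract (case-insensitive substring match)."""
    text = f"{title} {abstract}".lower()
    return {sdg for sdg, phrases in SDG_PHRASES.items()
            if any(phrase in text for phrase in phrases)}

hits = match_sdgs("Malaria incidence under extreme poverty", "")
# matches both the SDG 1 and the SDG 3 phrase lists
```

A document matched this way is added to the collection of each matching SDG category, which is how the first, keyword-based collections are bootstrapped.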
We try to collect publications from the Microsoft Academic Graph (MAG), which contains approximately 120 million publications, and create a small corpus for each SDG category. We keep 90% of them for training and 10% for development. So, we started with UNBIS and these SDG key phrases, and we queried an Elasticsearch server where we had indexed the entirety of the Microsoft Academic Graph. We collected all the publications that matched even one of these phrases, yielding a collection of SDG-specific articles. From these SDG articles we use unsupervised key term extraction to extract more key phrases, and we also use the metadata to inspect the venues that appear in each specific collection. A human then inspects these key phrases and decides whether the automatically extracted ones should be included in our seed vocabulary, thereby expanding this vocabulary. We do the same for Crossref: we use the key phrases once more and create Crossref collections for each SDG category, and once again we use automatic key term extraction to extract key phrases that may expand the SDG key phrases from UNBIS. So, after expanding the SDG key phrases, we once again query our database to expand our collection of articles, then again apply key term extraction, and so on. This cycle is performed three times, and at the end of the third cycle we had a first collection of SDG publications. Using these SDG collections, we applied guided topic modeling. In guided topic modeling you can redefine the prior probability of specific topics and assign keywords or tokens that you definitely would like to see in a specific topic. Since we had key phrases for each SDG, we assigned these key phrases to specific topics and forced the topic model to extract topics for each SDG category.
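Querying an index for any of the phrases can be expressed with a standard Elasticsearch boolean query. The sketch below only builds the query body; the field names `title` and `abstract` are assumptions about how such an index might be mapped, not the actual schema used.

```python
def sdg_query(phrases, size=1000):
    """Build an Elasticsearch query body matching documents that contain
    any of the given phrases in their title or abstract. Field names are
    assumed for illustration."""
    return {
        "size": size,
        "query": {
            "bool": {
                "should": [
                    {"match_phrase": {field: phrase}}
                    for phrase in phrases
                    for field in ("title", "abstract")
                ],
                # a single matching phrase is enough to collect the document
                "minimum_should_match": 1,
            }
        },
    }

query = sdg_query(["renewable energy"])
# one match_phrase clause per (phrase, field) combination
```

The resulting body would be posted to the index's `_search` endpoint; every hit is then filed into the collection of the SDG whose phrase matched.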
So, we applied guided LDA to both the Crossref collections and the MAG collections, and we extracted 34 topics for each collection. You can see here that, for the Crossref topic model, topic 33 for example contains malaria, tuberculosis, drug, treatment, which would probably be SDG 3, about health. Also, in topic 1 there is energy, power, production, consumption, and so on. So, using these key phrases we give a good indication to the topic model to create good topics. All of these outputs are examined by a human, and thus we have a human in the loop. We have examined all the venues and the key phrases that were extracted from MAG and Crossref. There we decided that the venues did not provide enough information: we could not use venues to assign a specific SDG category to articles, so we stuck to the titles, abstracts, and extracted key phrases. Through this curation an expanded vocabulary was created from the key phrases, and the human curator also examined the topics extracted by topic modeling and assigned an SDG label to each of these 34 topics. Also, as I said earlier, we kept 90% of the articles in these collections for training and 10% for development, and we trained two deep learning classifiers that utilize BERT: one BERT model with attention, which tries to identify words in a document that can be explicitly used to classify it into one category, and one BERT classifier without attention. Using these two deep learning models and a guided topic model, our final guided topic model that combines all the SDG collections, we created three models in total. These models are used in a pipeline, and the results of each one are combined by an ensemble mechanism. So, overall, our data were created using the UNBIS source; we collected and cleaned all the data.
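Seeding a guided topic model amounts to pinning each SDG's key phrases to a reserved topic so their prior probability is boosted there. The sketch below only constructs such a seed mapping; the terms and topic ids are illustrative, and the commented-out fit call mirrors the interface of packages such as guidedlda rather than the exact code used.

```python
# Reserve one topic id per SDG and pin its seed terms to it (toy terms).
seed_phrases = {
    0: ["poverty", "homeless"],        # topic reserved for SDG 1
    2: ["malaria", "tuberculosis"],    # topic reserved for SDG 3
}
vocab = ["poverty", "homeless", "malaria", "tuberculosis", "energy"]
word2id = {word: i for i, word in enumerate(vocab)}

# seed_topics maps word id -> topic id that the word should favour
seed_topics = {word2id[word]: topic
               for topic, words in seed_phrases.items()
               for word in words}

# With a guided LDA implementation, this mapping would be passed at fit time,
# e.g.: model.fit(doc_term_matrix, seed_topics=seed_topics, seed_confidence=0.9)
```

Unseeded words such as "energy" remain free to settle into whatever topic the model finds for them, which is how non-seed topics can still emerge.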
We then assigned each UNBIS term to an SDG category, thus creating a controlled vocabulary, version 1. We used this vocabulary to collect articles, expanded it through key term extraction into vocabulary v2, and then created version 3 of the vocabulary. In this way we created a silver corpus of documents that, based on their corresponding key phrases, we believe should be assigned an SDG category. The overall methodology can be seen on this slide: we have keyword matching, then in step two the training data collection, then we apply topic modeling, then we train the deep learning models. We apply these deep learning models and the topic models to infer the SDG categories of the publications, and then we give these publications to humans for evaluation. After this evaluation a second cycle begins, in order to identify better collections and create better topics, better deep learning models, and so on. We have also done a small evaluation with some people from the United Nations. They provided us with PDF files containing 170 meeting records, and we extracted speeches from these records, which are paragraphs where a person talks about a specific topic; these could be SDG related or not. Using our classifiers, we classified these paragraphs and gave the UN people a Google form with these data to carry out their evaluation. You can see on the right part of the slide that 80.6% of the time the human evaluators agreed with what the system decided. And that was all from me.

Okay, thank you, Dimitris. So the key point, the take-away message from the SDG classification system, is that it is also an incremental system, and what we actually measure is the relevance of publications and scientific artifacts to the SDGs, the Sustainable Development Goals. The point to emphasize here is that we measure relevance.
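The final step, combining the two BERT models and the topic model, can be sketched as a simple score ensemble. The averaging-and-threshold rule and all scores below are illustrative assumptions; the actual ensemble mechanism may combine the three models differently.

```python
def ensemble(predictions, threshold=0.5):
    """Combine per-SDG scores from several models by averaging, and keep
    the SDGs whose mean score clears the threshold. Averaging is an
    assumed combination rule, shown for illustration."""
    sdgs = {sdg for model_scores in predictions for sdg in model_scores}
    mean = {sdg: sum(p.get(sdg, 0.0) for p in predictions) / len(predictions)
            for sdg in sdgs}
    return {sdg: score for sdg, score in mean.items() if score >= threshold}

# Hypothetical outputs of the three models for one publication.
bert_with_attention = {"SDG 3": 0.9, "SDG 13": 0.4}
bert_plain          = {"SDG 3": 0.8}
topic_model         = {"SDG 3": 0.7, "SDG 13": 0.2}

labels = ensemble([bert_with_attention, bert_plain, topic_model])
# SDG 3 survives (mean 0.8); SDG 13 is dropped (mean 0.2)
```

An ensemble like this also gives humans in the loop a single confidence per SDG to accept or reject, which is what feeds the next curation cycle.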
At the next step, and we are actually working on that, the interest is in the intended impact: to what extent a specific scientific work contributes to a specific sustainable development target, to a specific goal, in the sense of an intended impact. Later on, at the third step, the idea is to try to track and monitor to what extent scientific developments had an actual impact on those Sustainable Development Goals. So it is incremental in the sense that we go from relevance, to intended impact, to actual impact of scientific research as it is stated in the scientific literature. Both services, as I said, are hosted and loosely integrated into OpenAIRE, and they are exposed through an API to OpenAIRE Explore. Konstantina will now guide us through how to make use of both services as an end user. Konstantina?

Hello. Let me share my screen. Can you see my screen now? Yes, yes. Hi, I'm Konstantina Galouni, I'm working for OpenAIRE, and today I'm going to present OpenAIRE Explore, which is a portal for discovering research; we will focus on the Sustainable Development Goals and Fields of Science classifications. OpenAIRE Explore is built on top of the OpenAIRE Research Graph, which is one of the largest open scholarly record collections worldwide. We also embraced the need to map research products to these two classifications, so we integrated them into the OpenAIRE Research Graph. Okay. As you can see on my screen, on the first page, right after the search form, we are promoting these two classifications with links to two special pages. These two classifications help us to view the contributions of research towards complex challenges of humanity. We will now see the specific page about fields of science.
We have a specific page where we show all the available levels of this classification. On the left side of the page there is a navigation menu of the first-level fields, and we have also added a search form so that users can search among all levels, either with a keyword or with a specific field of science. When someone searches for a keyword, the keyword is highlighted. Sorry, it is just a small window, which does not help. Yes, the keyword is highlighted, we can search among all the levels, and each field is clickable and leads to the search page, where this field of science is used as a filter to narrow down the results. For the Sustainable Development Goals we also have a specific page where we promote them. Here we present each goal with a card showing the number of research products in OpenAIRE related to that specific goal. The card is also clickable and again leads to the search page with the goal applied as a filter. OpenAIRE Explore, of course, is not only about these two classifications; it is a representation of the OpenAIRE Graph. It allows us to search among research outcomes and to enrich the graph through three basic functionalities. The first functionality is search, where users can search and filter among a big variety of research outcomes, organizations, data sources, and projects, and can also download research results and some reports about them. The second functionality is linking, where users can link research products with other research products, either within OpenAIRE or with external results coming, for example, from Crossref or ORCID; in this way they help enhance the information in the graph. The third functionality is deposit, where users can search and find a repository or journal where they can deposit their research in open access; it is an easy access point to these repositories.
Now, the search functionality. We have specific search pages with simple and advanced search forms, depending on the search needs, and dedicated search pages for each entity. The search can be done with a keyword or by using persistent identifiers of scholarly works like DOIs, PMC and PMID identifiers, or handles, and we can search for research products, projects, data sources, and organizations. On the left side we have filters, where of course we promote open access, and the response is customizable: users can sort by relevance or date, and they can choose how many results they want to see on each page, like 5, 10, or 50. Users can also download these search results. In the filters column we have specific filters for fields of science and Sustainable Development Goals, which help narrow down the search to these specific fields. When we click on the "view more" functionality, we see all of the available fields of science or SDGs, and we are able to search to find one easily, or sort them by name or by number of results. The six with the most related results are shown immediately, without clicking on "view more". The advanced search form helps users build more complex queries. It offers many fields that users can search on, and of course we have added fields of science and SDGs there. A complex query, for example, could be: field of science is not physical sciences AND sustainable development goal is climate action, or something else. Then, clicking on search, we can see that we have added three rules, and we will see the results just like in the simple search form. This is how search results are displayed: one result after the other, with the number of results for the search and the number of pages shown on top, together with the filters we have selected.
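The same FoS and SDG filters that back these search pages are also reachable programmatically. The sketch below only builds a request URL; the endpoint path and the `fos`, `sdg`, `page`, and `pageSize` parameter names are assumptions for illustration, so the OpenAIRE API documentation should be consulted for the exact interface.

```python
from urllib.parse import urlencode

# Assumed endpoint for research products; illustrative only.
BASE = "https://api.openaire.eu/graph/researchProducts"

def search_url(fos=None, sdg=None, page=1, page_size=10):
    """Compose a hypothetical search URL filtering research products
    by field of science and/or SDG number."""
    params = {"page": page, "pageSize": page_size}
    if fos:
        params["fos"] = fos
    if sdg:
        params["sdg"] = sdg
    return f"{BASE}?{urlencode(params)}"

url = search_url(sdg="13", page_size=5)
# a URL requesting the first five products related to SDG 13 (climate action)
```

Fetching such a URL with any HTTP client would return the same result set a user sees when applying the SDG filter in the Explore interface.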
These are the options we mentioned earlier: research results per page and the sort-by option. Okay, and on each result we promote open access with the green color. Also, at the bottom of each search result card, we promote some impact indicators from our integration with BIP! Finder, and there is also the option to add the result to the user's ORCID record. Here is a more detailed view of how these impact indicators are presented when we hover over them. Clicking on the title of a research result takes us to its detailed page, which is tailored for each entity, meaning we show specific information depending on whether we are viewing publications, research data, projects, organizations, or data sources. This is an overview of all the metadata the graph has about this specific research outcome. At the bottom we have several tabs for the relationships between this research product and other research products, with links to them within the graph. We also offer some statistics and metrics, as well as actions, for example sharing on social media, citing this article, or adding it to an ORCID record. On the right side we have some more information, among which are the SDGs and the fields of science related to this research outcome. We show two of them, and if there are more, we can click on the "view all" link and the right column changes its view in order to show all of the available ones. They are also clickable, so they can be used as filters in the search page. We also have the feedback button, to invite feedback from users, which is more than welcome. The feedback button leads to a feedback form where we can give feedback on many of the metadata fields of this record, and we can click it specifically for the fields of science or Sustainable Development Goals.
In that case this field is pre-selected, so we can instantly add our comment here, for example that this field of science or SDG is wrong, that something is missing, or that something additional needs to be added. Here is a preview of the list of all the available metadata fields for this research outcome. We also mentioned that we have the add-to-ORCID functionality, which is accessible through the search results page or the detailed page of a result. By clicking on this, we grant OpenAIRE access to read and update the ORCID record and works; we do this only once, and then we can easily add this research product to our ORCID record, where we can see it immediately. We also offer the "my ORCID links" page, where we can see a summary of the links added to our ORCID record via OpenAIRE Explore. This functionality is quite new, and it is integrated with the search and link wizard. The second functionality we mentioned is linking. Linking allows users to enhance the graph with relationships: we can link a research outcome with other research outcomes, as we said, either from within OpenAIRE or via Crossref, DataCite, or ORCID, which are external sources, and we can link it with other research outcomes, with projects, or with communities. Here is the list at the second step, where we are linking the specific research product we have selected with the other three. The third functionality is deposit. We have the "get started" button in order to start searching and find the repository or journal where we wish to deposit our research, and on the first page we have also added some details on how this can be done.
On the second step we select whether we want to search OpenAIRE for the repository we need, or deposit directly to Zenodo, which is the default repository for OpenAIRE. In the deposit search page we help users find their repository and go to it to deposit their research, so this is an easy access point for deposit. As for future steps, we have already planned some improvements in the search user experience, in the search results we present for a keyword search, and in the response time. We have also planned some improvements and enhancements for the impact indicators, for example the BIP! Finder integration we mentioned before, and we are going to add more levels in the Fields of Science and make some improvements in the SDG classification. For the SDG classification specifically, we are currently doing some checks, improvements and revisions that will be available very soon. Thank you. So, as I mentioned, both services are in beta, so try to stay tuned and keep up with the developments, as there will be major additions and updates in the coming weeks. More services will be added; more SciNoBo services will be added to OpenAIRE Explore. We have been using these services in several use cases, in several areas. For OpenAIRE Explore, the target group is the end users: the researchers, the labs, the universities and the institutions that can take advantage of it to discover other research, other publications, that have value and contribute to the SDGs, for example, in their own field. For the European Union, we have been using this framework as a flagging mechanism, in order to monitor and track scientific developments in various cross-cutting issues such as interdisciplinarity, the contribution of the social sciences, and sustainable development and biodiversity.
These were cross-cutting issues in H2020, and the way we did it is that, using these services, we can come up with some good indicators, some good KPIs, for monitoring the developments in those areas, in those cross-cutting issues. I think this is all, so we can start the discussion. Paula, the floor is yours, back to you. Yes, Harris, thank you so much, and thank you all for this very detailed presentation on how FOS and SDGs are being integrated in the OpenAIRE Research Graph, so that all users can view, access and search them via the discovery portal. Yesterday we also saw a slide on how this is being used in other services like Connect, the community gateway; we even saw an example from the Aurora university consortium, which is using these SDGs in the community gateway that Connect built for them, in a collaboration between Aurora and OpenAIRE. So, I don't know if there are questions, comments or doubts from the audience. Feel free to open your mic and address your questions to the speakers. Hi everyone, Emilia from Portugal here. It seems great work. It wasn't clear to me how we can extract a collection of publications according to the SDGs coming from a specific source, like a repository. I don't know which one of you would like to answer. So let me answer this question. We have a large repository, the Microsoft Academic Graph, which contains metadata for each article, such as title and abstract. Given that we have an SDG-specific vocabulary, so for each SDG category we have, let's say, 100 phrases, we use these phrases to query this database, this repository. When a key phrase is matched either in the title or the abstract, we collect that document and put it in the collection of the specific SDG category. So the first collection is each and every article that contains key phrases from the SDG vocabulary. Did I answer your question?
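The keyword-matching step described above can be sketched in a few lines of Python. The vocabulary below is a tiny illustrative sample with hypothetical phrases, not the real SDG vocabulary, which holds on the order of 100 phrases per goal:

```python
# Minimal sketch of the keyword-based SDG collection step.
# SDG_VOCABULARY is an illustrative stand-in for the real vocabulary.
SDG_VOCABULARY = {
    "SDG 13 (Climate Action)": ["climate change", "greenhouse gas", "global warming"],
    "SDG 14 (Life Below Water)": ["marine pollution", "ocean acidification"],
}

def classify_sdgs(title, abstract):
    """Return every SDG whose key phrases appear in the title or abstract."""
    text = f"{title} {abstract}".lower()
    return [sdg for sdg, phrases in SDG_VOCABULARY.items()
            if any(phrase in text for phrase in phrases)]

paper = {
    "title": "Greenhouse gas emissions and ocean acidification",
    "abstract": "We study climate change impacts on coastal ecosystems.",
}
print(classify_sdgs(paper["title"], paper["abstract"]))
# → ['SDG 13 (Climate Action)', 'SDG 14 (Life Below Water)']
```

Because a paper can match phrases from several goals, this naturally yields one or more SDG labels per publication, as described in the presentation.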
Emilia, I don't know if that helps, or if you want to open your microphone. Exactly. I think Konstantina can also add to this. I don't know if there is any functionality in the OpenAIRE Explore portal where you can filter, in short, what Dimitris mentioned by a specific repository: I want the SDG-related publications that come from a specific repository. That was the question. Yes, thank you. Okay, exactly, this is what I was going to add. Of course, we have the simple search and the advanced search form. The advanced search form allows more complex queries, so we can specify there that we want results for a specific SDG where the provider is a specific one, and so we get the subset of results for that source. Thank you. Do you want to say something? Because I think she tried already but couldn't find her university's repository. Was that it? In fact I have tried, but not now, not in this session. Before, I was trying and I didn't find the specific repository, but I will try again, and maybe that's okay. Thank you, Emilia. If you need something specific that you cannot find, please contact us. I think it's related to the fact that when you make a search on Explore, you will only see a limited number of repositories, the ones with the highest counts. I don't know whether extracting the information, or emailing, would let Emilia get the specific information related to her repository, or whether using the Monitor dashboard for institutions is a way to get this kind of information as well. Yes, on the simple search page, in the filters column, we have the repositories, but only the top 100. If we use the advanced search form, we can search with a keyword for the name of the repository and find it. The other way is to go to the "Search data sources" page and find the specific one.
Maybe, I don't know if you can share your screen and show it; showing it will probably help. Yes, just give me a moment. Thank you. You can now see OpenAIRE Explore. If we go to the advanced search page, we can select the SDG, climate action for example, and "collected from data source" is, I don't know which one, let's say this university one. I haven't checked it, I just chose one. So if we go there, we can see the results. Another way is to go to "Search data sources" and search for it. Okay, let's take this one. And if there are publications, we can find them from here; we also see a small subset of them. If we go to "View all", we go directly to the search page, and we can add a rule for the SDGs we want. I'm not sure if this covered your question. Yes, thank you, I think it helped. Thank you so much. Are there any more questions or comments that anyone from the audience would like to raise now? We have just a few minutes, so feel free.
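For readers who prefer a programmatic route, the advanced-search combination demonstrated above (one SDG plus one data source) can also be expressed as a query against the OpenAIRE search API. This is only a sketch under assumptions: the `sdg` and `openaireProviderID` parameter names and the example provider id are unverified here and should be checked against the current API documentation, so only the URL construction is shown.

```python
from urllib.parse import urlencode

# Assumed endpoint of the public OpenAIRE search API.
API_BASE = "https://api.openaire.eu/search/publications"

def build_query(sdg_number, provider_id=None, page_size=10):
    """Build a URL requesting publications tagged with one SDG,
    optionally restricted to a single data source (repository)."""
    params = {"sdg": sdg_number, "size": page_size, "format": "json"}
    if provider_id is not None:
        # Parameter name and id format are assumptions, not verified.
        params["openaireProviderID"] = provider_id
    return f"{API_BASE}?{urlencode(params)}"

url = build_query(13, provider_id="opendoar____::123")  # hypothetical id
print(url)
# The request itself would then be e.g. requests.get(url).json()
```

This mirrors what the advanced search form does in the portal: each extra rule (SDG, provider, keyword) narrows the result set further.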