So good morning and welcome to another OpenAIRE webinar, this time dedicated to the OpenAIRE Explore service. Before we start, let's go through some housekeeping rules. As you can already see, this event is being recorded, and we ask all participants to keep their microphones muted during the presentation. If you want to participate, you can use the chat to introduce yourself and the Q&A area to address questions to the speakers. At the end, you can also open your microphone or raise your hand to ask questions or talk to the speaker. The presentation and the recording will be shared with you by email and will also be available in the OpenAIRE portal in the webinar area. And do share this webinar using these hashtags and tag us on Twitter or LinkedIn, or even on Facebook, as we have a group there too. Now let's proceed with our webinar presenting OpenAIRE Explore, an open research discovery portal providing access to millions of interlinked scholarly works and their related research products, as well as contextual information such as organizations and grants. The Explore portal is built on top of the OpenAIRE Graph, one of the largest scholarly record collections, with integrated content from more than 12,000 data sources worldwide. But to better explain and show us all these features and new developments, we have here Konstantina Galouni, the product manager of this service. So thank you, Konstantina, for being here with us today and for accepting this invitation. And now the floor is yours. At the end we'll have a Q&A session for you to raise your questions. So thank you, Konstantina. I will stop sharing. Thank you very much, Paola. Hello all, I'm Konstantina Galouni. I will share my screen now. Can you see my screen? Yes, all good. Great. Hello, and thank you very much for being here. Today we will be presenting OpenAIRE Explore, and we will focus on the improvements and the new information available.
As Paola mentioned, OpenAIRE Explore is built on top of the OpenAIRE Graph and is a way to access and explore all the OpenAIRE Graph content, the entities and the relationships of the graph. These are parts of the home page, where we can see all the numbers that Paola mentioned and a small intro to the graph. OpenAIRE Explore has three basic functionalities: the search functionality, the link functionality and the deposit. With the search functionality, users are able to search across all the OpenAIRE Graph content and use filters on some of the metadata of the records of the graph to narrow down the search results. For each result, users can see an overview of the information available for that record. They can download the search results, and they can also download some reports on the relationships. We have integrated statistics, metrics and impact-based indicators, and users can see them through the search pages and the detailed pages. The link functionality is the way to enrich the graph by linking OpenAIRE entities with other OpenAIRE entities, or OpenAIRE entities with external results. We also have a useful integration with ORCID, supported by the ORCID Search & Link wizard. The deposit functionality is a way to find a repository or journal where you can deposit your research. Let's see an overview of the improvements and the new information we will be discussing later in more detail. We have improvements in the search pages: we have improved the search form, the filters and the search results. We also have some new information and new functionalities: we have added actions for each result in the search pages, and we also have usage counts, metrics and impact-based indicators, currently for research products; soon they will be available for the other entities too. For the detailed pages, we have made a redesign in order to better display and also highlight the impact-based indicators.
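As a side note for developers, the same Graph content behind the search pages is also exposed programmatically. The sketch below shows how a keyword query might be assembled client-side; the base URL and parameter names (`keywords`, `page`, `size`, `format`) follow my understanding of the public OpenAIRE search API and should be verified against its official documentation before use.

```python
from urllib.parse import urlencode

# Sketch only: the base URL and parameter names are assumptions based on
# the public OpenAIRE search API; check the official API docs before use.
BASE_URL = "https://api.openaire.eu/search"

def build_search_url(entity: str, keywords: str, page: int = 1, size: int = 10) -> str:
    """Build a keyword-search URL for one entity type (e.g. 'publications')."""
    params = urlencode({
        "keywords": keywords,  # free-text keyword search
        "page": page,          # page of results to return
        "size": size,          # number of results per page
        "format": "json",      # request a JSON response instead of XML
    })
    return f"{BASE_URL}/{entity}?{params}"

url = build_search_url("publications", "open science")
print(url)
```

Fetching that URL with any HTTP client would then return one page of matching records; the filters shown in the portal presumably map to additional query parameters of the same API.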
We also have a new interface to suggest missing Fields of Science or SDGs, sustainable development goals. We also have a better, actually an easier, way to search the OpenAIRE Graph content through the menu. On the linking, we have small improvements in the direct linking flow, where we can go from the detailed pages. And we have also made a redesign of the search results and the detailed pages for mobile screens. This has been done, but we are still improving it and adding some content. Most of these improvements have been thoroughly user tested, and we specifically focused on the search results and the detailed pages. The usability tests were made to evaluate the usability of the search and landing pages and the detailed pages, and to gather feedback from users to improve the Explore service. We had 10 participants, who were researchers and user interface experts. Most of them had a technological background. We covered all levels of experience with research platforms, with two beginners, four intermediate and four advanced users. We had recorded one-to-one meetings, and we also used a feedback form at the end of each test to gather the participants' feedback. The methodology we followed was based on the beta version of OpenAIRE Explore, which was updated with the new version and included the new landing page for results and a hybrid version of the search page, between how it was in the past and how it is now. A mock-up version with more changes to the filters was presented, to compare what was preferred, and participants were asked to perform tasks related to finding and exploring the content. Let's start by seeing in more detail what we have done and what the changes are. For the search pages, we have simplified the search form and also unified all the search pages. This means that we no longer have two steps in the search form, and the user does not need to select a specific entity to search.
They can easily use the simple keyword search and, with the unification of the search pages, users always see all four available entities, which are research products, projects, data sources and organizations, and they can quickly and easily navigate through all of them. The filters, sorry, the types of research products, the different types, which are publications, research data, research software and other research products, were moved from the top part, where they used to be, to the left side with the filters, to be better grouped together with the document type. The document type filter appears when a user selects at least one of the main types of research product. This was done to better group the type concept and to gain more space, in order to make the results more visible to users. We have made a redesign of the search results. We removed the cards we used to have, and now, when users hover over one result, a light blue background covers this result, so it is easy to read and to focus on the one they want. There is a more compact way of exposing metadata, right after the title. Each piece of information has a tooltip with an explanation, which means that when we hover over it, we can see what this information is about. We also have the ability to view all authors of a research product from the search page. For each research result, at the bottom of this, let's say, card, we have a line with the actions that previously were accessible only from the detailed page. This allows users, for example, to immediately share the result on social media, cite the publication, claim this research product to ORCID, embed the research result in their own website, or view all the available sources. This is an easier way to do more without clicking through to a specific detailed page.
On the same line at the bottom, on the right side, we are exposing the new indicators and usage counts, that is, the usage count indicators and the impact-based indicators, which are citations, popularity, influence and impulse. When a user hovers over each one of them, a pop-up window appears to show in more detail what the values are and where we got these values from. For the impact-based indicators, we have an integration with BIP! Finder, and for the usage count indicators, we have an integration with the UsageCounts service. We have also done better filter grouping, especially on the project search page. In the user tests, users had difficulty setting the end year, so we reordered those filters and grouped together all the ones that are date-related. For the detailed pages now, this is the overview of a detailed page. As you may already know, each detailed page is tailored to the entity selected, which means we have different detailed pages for research products, projects, data sources and organizations. The main content of the page has not changed much, but we have moved the actions bar to the top of the page, where the Add to ORCID button has been renamed to Claim to ORCID. There is a more compact way of exposing the metadata again here, just like in the search pages, and we have also added a dedicated tab for subjects, to emphasize them, because they seem to be a very important piece of information for researchers. On the top right of the page, in the right column, we have a blue box with the usage count metrics and the impact-based indicators, and we can see an overview of them. In the tabs, we have a dedicated metrics tab we can click on, and if we click in the blue box, on citations or popularity or any of the names, a pop-up window appears to better explain what this is about and where we get it from.
When clicking on the metrics tab, we first have an overview of all the available usage counts and impact-based indicators, and we also have here the Altmetric badge. And we also have three more tabs, one for each category, for each integration, to show those numbers in more detail, with some charts available. We have, for example, a chart of the impact-based indicators from BIP! Finder and, for the usage count indicators, the monthly views and the monthly downloads. For the Fields of Science, we have removed the numbers from the labels, because they are long and very confusing for users. And when there are more than two or three Fields of Science or sustainable development goals, users can click on the View All and Suggest button. The box on the right shows all of them, and users can also use the Suggest button to open the new interface, the new way of suggesting Fields of Science and sustainable development goals from the lists that are available on the graph right now. This feedback is very important for us, to better check all the content and whether we are missing something. We wanted to give users an easier and quicker way to search the OpenAIRE Graph content from the detailed pages. This is why we added this icon to the menu, the search icon: if we click it, the menu options are hidden and the search form appears, so we can write a keyword, click on search and move to the search pages immediately. On the link functionality, for the linking we have two flows. The first one is by clicking the link from the menu. This is the generic linking functionality, where users can search the graph and external sources, such as Crossref or DataCite, to find research products and link them with other research products, projects or communities. We have also made some improvements on the other flow, which starts from the detailed pages, if we click on...
Sorry, Konstantina, the slide has stopped, so we only see... Yes, the direct linking functionality is the one where, if we click the link action button from the detailed page, we want to link the specific research product with some other research products, projects or communities. Here we removed the stepper from the top of the page, because the first step has already been made. We have added a link to be able to go back to the publication, for example, or whichever research product we came from. And we have updated the basket, the right column, to always show the source on top. We have also made a redesign for mobile, especially for the search results and the detailed pages. This is a completely new view we have been working on, especially for small screens. This is an overview of the search results. I think we can see that users are now able to better see all the metadata and the access mode, for example whether it is open access. And they are also able to see the usage count metrics and the impact-based indicators, along with the pop-ups we already discussed. The filters button is a floating button, always displayed at the bottom right of the page, and the screen changes when we click on it to view all the available filters. For the detailed pages, we have also made some improvements to better fit mobile screens. We start with the main information and an overview of the entity we are on. For example, here it is a publication, as we can see at the bottom left. We have the main information and tabs here. When we click, the screen changes and we see this particular information, which we can close with the X on top. And if we want to see something in more detail that does not fit this screen, we can go back with this arrow, or completely close the window and view the first screen with the X. The three options at the bottom of the page focus on the main information and the main actions that users have in a detailed page.
The second one is for metrics, and there is a way to view again the usage count indicators, the impact-based indicators and also the Altmetric badge. Sorry, is someone talking? Okay. The third option at the bottom is the actions, where we can see all the available sources. We can also see the versions. We can share this research result on social media, cite it, claim the work, or use the linking functionality. For the next steps, we are focusing both on content and on some improvements to the user interface. We are planning to design new levels for Fields of Science and sustainable development goals. We are in the process of adding direct PDF links to the research products, so that users can directly access the sources and the whole record. We are also planning to add the usage count metrics and impact-based indicators to the projects search page, and to add the usage count metrics to the data sources search page. We are already making more improvements to the redesigned pages for the desktop and mobile views, and for mobile specifically, we are going to redesign all the other pages as well. For organizations, we want to display and search by persistent identifiers, such as, for example, ROR. I think this is it from me. I think I was quick enough. Thank you very much, and I'm here for your questions. Thank you. I know that the portal is having some issues, but as you told us, it hasn't affected the Explore service. So I don't know if you want to go through some of the features and do a short demo of the service, if that is possible. This is possible. Of course, I think I covered all the functionalities with the slides, but we can have a quick view of how it is right now. Thank you. We start from the home page, where we see the simplified search form. When we click on it, we see that we have all the entities over here.
And we can see that even if I go from the menu, for example, to the projects search page, I'm still on the same page, but with the projects tab selected. This means that on every search page, I can always see all the available entities of the graph. It helps, for example, to easily search for something, and if I no longer want to see organizations, I can easily go to the projects tab and navigate through the projects. I probably forgot to mention that we have also added the keyword here, so that users are immediately aware of what they searched for, even if they go down the page. For the research products specifically, we can see here that the types were moved, the types of research products were moved here. And if I click, for example, research software, we can see that we have one document type, which is software. If I had clicked some others too, we would see that we have more document types now, because the results are either research data or research software. Let's check... okay, let me go to a specific data page. We can see that on top we have the actions bar. We have the information here. I'm sorry, I will go back to show you the actions bar that I forgot to show you here. We see that when I hover over the usage counts or the impact-based indicators, we can see this pop-up window. We can see that if I click here, we have all the sources available. If I hover over here, I can see that Springer Science and Business Media LLC is the publisher. And, for example, that these are the projects. On the detailed page, the actions bar is on top. We have the blue box with the metrics on the right side of the page. We have the tab over here for the metrics, and for all the available categories we have a specific tab which exposes all the dedicated information. And we also have the subjects tab, which shows in particular the keywords, or the subjects by vocabularies. For the Fields of Science, we have the View All and Suggest option.
We can see here all the available Fields of Science that are related to this specific publication. And we can click on the Suggest button to view the whole list of the Fields of Science available on the graph and select some of them, then click on the next button. And then, if someone wants, they can add their email to get some feedback; this is not necessary. We should check the I'm-not-a-robot box, the reCAPTCHA, and then click on Send Feedback. I won't press it now because I didn't put in any real data. For the linking, I need to log in first; I'm not signed in. We can see that the generic flow I mentioned shows this stepper bar on top, where we can search for some research products, check them, and then move to the second step to search, for example, for a project here. As you can see, this basket box has two tabs, one for the sources and one for the links. But if we go through the landing page, and from the landing page click on the Link To action button, no stepper appears here, because the first step has already been done. The source is pre-selected, and it is the one I came from. And we can go back to the publication we were on before. I don't know if I have anything else to show. I can also... We have some questions, if you want to address them. And thank you for this short demonstration that you've made. We have here in the Q&A area a question from Lauren Schmidt: how were criteria like popularity, influence or impulse assigned? Okay, this is an integration with the BIP! Finder service. This is another service from OpenAIRE. We have integrated those values in the graph, and we get them from the graph. As for how they are explicitly calculated, unfortunately I am not the right person to answer, because it's another team's work. But you can contact us and we can get you in touch with the proper team for more detail. Yeah, we have a service help desk at openaire.eu if you want to address any doubts or questions related to OpenAIRE services.
So feel free; I will put the help desk email right in the chat if you want. Yes, and I see that Alessia has also shared the page of this tool in the chat. Thank you very much, Alessia. Yes, there you can find information about the indicators and also the publications that explain the process and the methods used for their calculation. This link can also be found from Explore: when clicking on, for example, citations, we say powered by BIP! Finder, and clicking on that will take you to the BIP! tool page. We have here another question: on the Explore website, the tagline is discover open linked research. Are these links between research data and publications? If yes, are they based on DOIs? Sorry, should I read it? No, I'm sorry, if you want to read it. No, no, it's okay. Okay. Yes, we have multiple linked data. This is basic work of the OpenAIRE Graph. We apply multiple algorithms to deduplicate some of the data and find the links between them. Again, this is a matter of the graph process, and I'm not exactly sure what to answer. Alessia surely knows better, but DOIs are definitely among the metadata fields that are very important for this linking process. Yes, maybe I can add a couple of words, if you want. Thank you. So there are several ways a link between research products can enter the graph. It can be harvested directly from the repositories, the sources that we have, and in this case, yes, the presence of persistent identifiers, which can be a DOI but also a PubMed identifier, so every type of persistent identifier, is used in order to map these links between publications and data and software and so on. But we also have mining algorithms that look into the full text of open access publications, and in that case what we try to find is, yes, the DOIs in some cases, but also, for example, to create the links between the articles and the software, GitHub links or links to other software repositories.
And these are found in the footnotes of the articles, in the text of the methods sections of the articles, and so on and so forth. So there are different approaches based on the kind of entities that we want to link. Thank you. Thank you so much. There's another question here, from Bremen, saying the new developments look great, very well done. What should I do to correct the very limited results in search when I filter by my organization or by data source, my institutional repository? This is a content question. Whoever is responsible for the specific organization and data source should probably contact the respective team through the PROVIDE portal, and probably add the organization or the data source to the graph according to our guidelines. Paola, I don't know, would you like to add something? I'm not sure. Do you want me to? I don't know if you have anything to add. Sorry, I thought you wanted to say something. Sorry for that. No, no, sorry, I was reading another question, so I didn't catch what you were saying. Sorry. But you can access the OpenAIRE PROVIDE portal and find there some instructions on how to register, for example, the data source to be compatible with the OpenAIRE guidelines, so that the OpenAIRE Graph can harvest data from there. Yes, this is also here, so maybe you can also help with this, but yes, if you have some issues with your repository or institutional repository, you may have to send us an email, and the PROVIDE team may give you some help. And now we have here one other question: what are the research community filters? Where do they come from? Yes, I'm very happy about this question, because this is how we created a connection between OpenAIRE Explore and the OpenAIRE CONNECT service. With OpenAIRE CONNECT, currently, we are providing portals that are dedicated to specific research communities. So, for example, we have the one about Neuroinformatics.
We have the research community for DARIAH, and for many, many others. And we want to show that we have these dedicated portals for the communities. This is why you can see the filter in the search result list on the left. When you open the landing page of a research product, you will also find there the links to the different communities this research product is relevant for. Clearly, we don't do it just because; we serve a community because we are engaged with them. So we have an agreement with them, and they provide us the criteria to identify, within the OpenAIRE Graph, the products that are relevant for them. Maybe, if I can add, it's also a matter of trust. So if you trust this kind of community, for instance DARIAH, because you know the research infrastructure, then having a product that is linked or recommended to this research community can also be helpful for you as a researcher, to maybe even increase the trust in the sources. Thank you. Thank you, Julia, also. And if you want to open your camera or address your question directly, please feel free to do so. And thank you, Alessia, for answering this question. We have here another one, from an anonymous participant: what is the impulse metric based on? The impulse metric is again based on the BIP! Finder tool. You can find more information on the site that Alessia shared with us in the chat earlier. I don't have more to say, unfortunately, because it's a content question and I'm not on top of it. I have more questions here, but I can go through this one. Hello, it seems the metrics are not available for all research products. Does it depend on the source, or on the BIP! services? It's definitely the BIP! Finder service. They have some criteria for calculating those metrics, those indicators, and I don't know exactly what these criteria are, but you can contact us, or maybe find some more information on their page. Thank you.
And about the filters for the research data types, can you tell us more about those? I know that they are subcategories of research data, based on the content of the specific research data. Let me check what they actually are, because I don't remember them by heart. Give me one second. Yes, I can see it's bioentity, dataset, image, audiovisual, clinical trial, and others. I think it depends on the source we got this information from, and probably on mining algorithms that determine, according to the content, what it is about. It's about narrowing down the search further. Okay. Another one: in terms of assigning new Fields of Science or sustainable development goals to a new publication, who does that? Is that controlled? Yes, as soon as you use our form to send us your suggestions and feedback, we get an email with your suggestions, and this is reviewed by humans, specifically by the team that builds and curates these classifications, from the Athena Research Center, which is a partner of OpenAIRE. Now we have here a more specific question, related to a researcher with a certain ORCID iD, who says he has more than 300 publications, but only 18 appear here, so maybe he's doing something wrong, and asks if we have some suggestions. You can check the specific author and see if indeed the number of results is as mentioned. You can use the advanced search page to specify the author by their ORCID iD. Indeed, we have 18 research products. The fact that there are 18, although the user may have more, is because we may not have the association that this user, with this ORCID iD, is among the authors of these specific research products, or probably we don't harvest from the places where they have deposited their research. What the user can check, if, for example, you are one of them, is the following.
Try to find this specific research product in the graph, and if your ORCID iD is not among the authors, and it is just your name, you can claim it to ORCID and see it immediately in your ORCID record page. And as soon as the graph synchronizes its data with the ORCID data, the green ORCID icon will appear next to your name. If these works are those of another user, you cannot do much, except perhaps suggest to us the repositories where this research was deposited, so that we can also harvest from there, if possible. Thank you. I just left the email for the help desk for those who are still here. And thank you for answering all these questions. We have a last one, Konstantina: what are the relevance criteria? Is it the default results page criteria? Yes, it is the default for the results page. It is based on how we have configured the backend service that serves the contents of the graph. I don't know if we have relevance criteria as such; we also have a date criterion. It's a matter of sorting. And if someone adds a keyword in the search form, we have some pre-specified fields that we search on, and we usually mention them in the placeholder of the search form. For example, if we search the title and the authors list, we say: search by the name or the authors of the research product. If you want a more specific query, you can use the advanced search form, where you can specify exactly which fields will be queried in order to find the relevant data. Thank you.
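The earlier point about the 18-of-300 discrepancy comes down to whether the ORCID iD is actually attached to an author entry in a record: identifier-based matching only finds records with the iD, while name-only records remain candidates for a manual claim. Here is a small, self-contained illustration of that distinction; the record structure is invented for the example and is not the Graph's actual schema.

```python
# Illustrative sketch: split records into those matchable by an author's
# ORCID iD and those where only the name appears (claim candidates).
# The dict layout below is hypothetical, not the OpenAIRE Graph schema.
def split_by_orcid(records, orcid_id):
    matched, name_only = [], []
    for rec in records:
        authors = rec.get("authors", [])
        if any(a.get("orcid") == orcid_id for a in authors):
            matched.append(rec)    # found via the persistent identifier
        else:
            name_only.append(rec)  # would need a manual ORCID claim
    return matched, name_only

records = [
    {"title": "A", "authors": [{"name": "J. Doe", "orcid": "0000-0002-1825-0097"}]},
    {"title": "B", "authors": [{"name": "J. Doe"}]},  # no ORCID iD attached
]
matched, name_only = split_by_orcid(records, "0000-0002-1825-0097")
print(len(matched), len(name_only))  # prints: 1 1
```

Claiming a work through the portal, as described above, is what turns a name-only record into an identifier-matched one once the graph synchronizes with ORCID.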
Thank you so much, Konstantina, for the detailed information that you shared with us and for having answered all these questions. I don't know if anyone has more questions or doubts to address to Konstantina. I see that Andre has also replied to the colleague regarding that specific repository, and I've shared the email for the helpdesk service of OpenAIRE. I also shared a link for you to send us your opinion about this webinar, and I'll share it here with you again; it's very important for us to have your opinion. And I want to tell you that the recording and the presentation will also be made available on YouTube, on Zenodo and on the webinar pages. I would also like to leave here another link, for the support area of OpenAIRE. There you have some materials and resources that are very useful, not only related to the services, but also to RDM and open science issues. So you can find there fact sheets, webinars, guides and other support materials that may help you in your research activities. From my side, thank you, Konstantina, and thank you all for being here, and maybe we'll see you soon in future OpenAIRE events. So thank you so much. From my side, I would like to thank you all as well, very much, for being here. I mentioned the user testing process that took place a couple of months ago; if any of you would like to be in touch and perhaps also participate in future user testing, we would be happy for you to contact us and tell us that you want to be one of those people, because your feedback is very important for us. We would like to hear from you and be in touch. Thank you very much. Thank you, Konstantina. Thank you all. And I will leave the session open for a few more seconds for you to copy some links if you want, and then I will close it. Thank you, Konstantina. Thank you all. Have a nice week. Goodbye.