So today we have Dimitris Pierrakos from the Athena Research Center in Greece, who is the service manager of the OpenAIRE Usage Counts service, and I would like to thank Dimitris for being here today to present the service. So Dimitris, go for it. Thanks, André. Okay, let me share my screen. Can you see my screen? Okay, so I can start. Good morning. Thanks, André, for the introduction, and I'd like to thank you all for joining this series of OpenAIRE webinars, and this Usage Counts session in particular. A few more details about me: I have a Bachelor's in Physics and an MSc and PhD in Computer Science, and since 2014 I have worked on the OpenAIRE project as a member of the technical team. As André mentioned, I am the developer and the administrator, or product manager, of the OpenAIRE Usage Counts service. The Usage Counts service is the service developed in OpenAIRE to analyze the usage activity in content providers participating in OpenAIRE and to build usage statistics. Usage statistics provide a kind of alternative way to measure the impact of research and complement traditional scholarly impact indicators, but I will dive into more details during this presentation. So, an overview of this presentation. First, we will give an introduction to the concept of usage statistics, why they are important, as well as the available standards for acquiring and exchanging usage statistics. Then we will present the Usage Counts service operation, in terms of architecture and engagement with the service, and some initial numbers that show how the service has been deployed. An interesting part of this presentation, at least for me, and I hope for you also, will be the future developments: we will discuss and present the updates to the usage statistics concepts following the adoption of the new release of the standard. And finally, we will proceed with a demo, I hope without surprises, where we will try to show how the service outcomes have been deployed and how they can be consumed through various interfaces, either inside OpenAIRE or by content providers in particular. So first of all, usage statistics. What are usage statistics, and why do we bother collecting, building, and working with them? Usage statistics are based on the processing of usage events, and usage events are triggered by accessing digital objects in content providers, for example repositories, journals, and CRIS systems. Digital objects are articles, datasets, theses, books, etc. Once triggered, a usage event can be either a metadata view of the digital object, for example a visit to the metadata page of an article, or a full-text download. Why are we using usage statistics? Why are we exploiting them? Usage statistics are important not only to content providers, but also to a variety of stakeholders, like publishers, aggregators, and funders. For many content providers, usage statistics are a required element of their regular reporting to funding and accreditation agencies. Usage statistics can play an important role in driving a bibliographic institution's internal marketing and the development of the provider's website. For example, for publishers, the number of full-text requests for a given journal is a strong indicator of the journal's impact. In terms of marketing, monitoring usage over time is a good way of determining the effectiveness of various content and marketing or campaign strategies that may be implemented.
Finally, aggregators, like publishers with larger services, require usage statistics not only to assist with collection management, but also to help with the overall management of their services. For example, monitoring full-text retrieval over time helps with capacity planning for their infrastructures and allows the services to plan ahead for growth or even peaks in demand. More or less, usage statistics are a logical choice as another impact measure that can supplement other impact indicators like citation counts. Don't forget that readership information can be easily and somewhat routinely collected. Another aspect is that bibliometric indicators do not show the usage of published work by non-authors, for example students, academics, or non-academic users who do not usually publish but may read scholarly publications. Usage statistics for academic publications may therefore help to give a better understanding of the usage patterns of documents, and can be more recent than other bibliometric indicators. This is a very important aspect of usage statistics as a concept. On the data provider level, usage statistics can serve repository managers or content-hosting institutions as a tool to evaluate the success of their publication platform. On the individual item level, on the other hand, this tool can demonstrate popular publications to authors and readers. In addition to other traditional measures like citation counts or alternative metrics, for example mentions, recommendations, or posts on social media, it can inform funding authorities in research evaluation processes. Finally, usage statistics on the item level can reflect the relevance of a particular research output, research topic, or data source over the course of time, up to the present, so they can be an important indicator for analyzing trends for these items. So, we discussed the importance of usage statistics, but how about standards? Is there a way to have a common language to collect, exchange, and, most importantly, compare usage statistics among content providers? This is where the COUNTER Code of Practice enters the scene. COUNTER, which stands for Counting Online Usage of NeTworked Electronic Resources, is a non-profit organization supported by a global community of library, publisher, and vendor members, all of whom contribute, through working groups and outreach, to the development of the set of directives that constitute the COUNTER Code of Practice. So the COUNTER Code of Practice is, more or less, the international standard used by librarians, publishers, and other content providers for reporting usage events for electronic resources in standardized ways, and, I would add, also for collecting and processing usage events. So it is not only about publishing reports; the COUNTER Code of Practice also provides the instructions, the directives, for processing the usage events that are collected. COUNTER-standardized usage reports allow providers to compare usage across different publisher platforms, assess user activity, inform renewal and new purchasing decisions, justify budget spend to stakeholders, derive cost per use for content, and inform faculty and power users about the value and use of current content provider or library resources. There is a variety of services offering usage statistics.
IRUS, of course, is the national service for the UK; Make Data Count focuses on datasets; there is Knowledge Unlatched; Elsevier for publishers; LA Referencia, which covers South America; OPERAS, which mostly focuses on books and monographs; RAMP; and more. So how about our service, the Usage Counts service? This is the usage statistics service for the OpenAIRE Research Graph, and I will explain this concept in the next slides. First of all, OpenAIRE. OpenAIRE is a European project funded by the European Commission since 2008; it has led the shift to open scholarship in Europe and helped alignment with the rest of the world. OpenAIRE more or less bridges the worlds where science occurs and where science is published. On one hand, OpenAIRE is a community-driven organization at heart, which has built a participatory infrastructure in European Union member states and associated countries. This infrastructure is accompanied by a service-driven approach which aims to support, accelerate, and monitor open science. So OpenAIRE offers services for all stakeholders: researchers, scientists, content providers, funders, institutions, etc. One of these services is the Usage Counts service that we will be discussing in this presentation. The main structure of OpenAIRE information is the OpenAIRE Research Graph, comprised of interlinked scientific products with access-rights information, linked to funding information and research communities. The OpenAIRE graph provides a kind of model for the representation of this information, and OpenAIRE uses it to represent objects which, in the context of OpenAIRE, are called research products. These research products are represented in the scholarly communication domain, and the graph also includes the relationships that exist among them. So, the Usage Counts service inside the OpenAIRE infrastructure: Usage Counts is one of the pillar services of OpenAIRE and simply counts the usage of the graph entities, in other words the usage of the research products across the network of OpenAIRE content providers. On one hand we have the content providers that produce the content: more than 200 repositories, aggregators, data archives, software repositories. This content is enhanced by other functionalities of OpenAIRE which operate on research products, for example deduplication or text mining, and then the Usage Counts service exploits all this enhanced information to provide a more holistic approach to usage statistics; this will be discussed later. So, how do we do it, and how does the service operate? We have two different approaches for building usage statistics. The first approach is based on what we call a push workflow, which is depicted in the upper part of the graph. This approach is based on the Matomo analytics platform. Following this approach, an institutional repository registers in the OpenAIRE PROVIDE portal, which is the main entry point for participating in the OpenAIRE infrastructure. After the registration, server-side tracking software is offered to the content provider, in the form of either a plugin for the DSpace platform or patches for the EPrints platform. We also offer a generic tracker, which is based on Matomo and covers all the other platforms that are available. All the plugins, patches, and the generic Matomo tracker exploit the Matomo HTTP API.
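To make the push workflow concrete, here is a minimal sketch of the kind of call such server-side tracking software makes against Matomo's HTTP Tracking API. The host, site id, and helper function are assumptions for illustration; the actual OpenAIRE plugins and patches are the supported way to do this.

```python
# A minimal sketch of server-side tracking against the Matomo HTTP
# Tracking API. MATOMO_URL, SITE_ID and track_event() are hypothetical;
# real deployments use the OpenAIRE plugins/patches, which make calls
# of this general shape.
import random
import requests

MATOMO_URL = "https://analytics.example.org/matomo.php"  # hypothetical host
SITE_ID = "1"        # hypothetical Matomo site id for the repository
TOKEN_AUTH = "..."   # needed so Matomo accepts the overridden IP (cip)

def track_event(page_url, title, client_ip, user_agent, download_url=None):
    """Report one usage event (metadata view or full-text download)."""
    params = {
        "idsite": SITE_ID,
        "rec": 1,                  # record the request
        "apiv": 1,                 # tracking API version
        "url": page_url,
        "action_name": title,
        "cip": client_ip,          # original visitor IP (server-side override)
        "ua": user_agent,
        "rand": random.randint(0, 2**31),  # cache buster
        "token_auth": TOKEN_AUTH,
    }
    if download_url:               # full-text download instead of a page view
        params["download"] = download_url
    requests.get(MATOMO_URL, params=params, timeout=5)

# Because the repository server itself reports the hit, even a direct
# PDF request arriving from Google Scholar is tracked (as a download):
# track_event("https://repo.example.org/item/1/file.pdf", "Some article (PDF)",
#             "203.0.113.7", "Mozilla/5.0 ...",
#             download_url="https://repo.example.org/item/1/file.pdf")
```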
I'd like to mention that we do not use Matomo's native tracker, which is based on JavaScript, in order to avoid missing what we call indirect requests. We don't want to miss a direct request: for example, if a request for a PDF in a content provider comes from Google Scholar, Matomo's native JavaScript tracker will not count it, while our server-side software will properly recognize the request and track it accordingly. So usage activity is tracked and logged at the Matomo platform in real time. IP anonymization is also offered as an option, to respect users' privacy. Keep in mind that IP anonymization is applied on the server side: if it is enabled, all IPs are transferred to Matomo already anonymized. We do not anonymize the IPs inside Matomo; they arrive anonymized if this option is enabled. The COUNTER Code of Practice robots list is also used when we process the usage activity, in order to remove non-legitimate traffic. The information is then transferred offline to OpenAIRE's databases for statistical analysis, which again follows the COUNTER Code of Practice directives. From this usage statistics database, statistics are deployed to OpenAIRE portals like PROVIDE, EXPLORE, and the recent Open Science Monitor, or can be collected via API endpoints based on the SUSHI-Lite protocol. So this is the first approach. The second approach follows what we call a pull workflow. This is used for gathering usage statistics reports from aggregation services; this is the case we have employed for IRUS-UK, and we are using protocols based on SUSHI-Lite or other REST API interfaces. Using this pull workflow, we also cover open science journals, with a dedicated plugin for OJS platforms. Again, the statistics are stored in OpenAIRE's database for statistical analysis, integrated with the statistics collected from the push workflow, and then, again, deployed on OpenAIRE interfaces and portals, or retrievable through the SUSHI-Lite API. So, just to recap the main functions of Usage Counts: we track raw usage activity using the push workflow, or collect COUNTER reports following the pull workflow. We offer anonymization of IPs. We enhance this usage information with metadata deduplication, which enables the accumulation of usage for the same research outputs; this is the role of the metadata index that I mentioned in the previous slides. Statistics are produced following the COUNTER Code of Practice, in order to provide a standards-based form of usage statistics. And finally, as mentioned in the previous slides, we produce indicators that complement other traditional and alternative bibliometric indicators, in order to provide a comprehensive and, most importantly, recent view of the impact of academic resources. Usage Counts is also a part of EOSC. All the features provided by Usage Counts are also available in EOSC, where Usage Counts is a core service offering what is known as the task of accounting for research products. The research products accounting service is able to aggregate, either by push or pull, usage indicators for different types of EOSC research products, like datasets, articles, books, etc., and research products accounting is provided by the OpenAIRE Usage Counts service. As a part of EOSC, usage statistics will be collected and made available.
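The two cleaning steps mentioned above, server-side IP anonymization and robots-list filtering, could look roughly like the following sketch. It assumes the COUNTER robots list has been downloaded as a JSON array of user-agent regex patterns; the file name and the anonymization granularity (zeroing the host part) are illustrative assumptions, not necessarily the service's actual configuration.

```python
# A minimal sketch of (1) anonymizing the client IP before it leaves
# the server, and (2) dropping hits whose user agent matches the
# COUNTER robots list. The robots list is assumed to be a JSON array
# of entries with a "pattern" regex; the anonymization granularity is
# an illustrative choice.
import ipaddress
import json
import re

def anonymize_ip(ip: str) -> str:
    """Mask the host part of the address (last octet for IPv4)."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

def load_robot_patterns(path: str):
    with open(path) as f:
        entries = json.load(f)
    return [re.compile(e["pattern"], re.IGNORECASE) for e in entries]

def is_robot(user_agent: str, patterns) -> bool:
    return any(p.search(user_agent) for p in patterns)

patterns = load_robot_patterns("COUNTER_Robots_list.json")  # assumed filename
print(anonymize_ip("203.0.113.7"))          # -> 203.0.113.0
print(is_robot("Googlebot/2.1", patterns))  # -> True for typical robot lists
```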
Research product usage statistics are integrated with the EOSC research product catalogue. Another important aspect is that the enrichment of the EOSC research catalogue with usage statistics indicators will be visible to end users and offered in the dump of the catalogue when it is published on Zenodo. So, together with all the metadata information, the EOSC research catalogue published on Zenodo will also include the usage statistics produced by our service. Usage Counts is also part of the virtual access metrics accounting system offered by EOSC. So, engagement: how can you participate in the service? First you have to visit the PROVIDE dashboard and register your data source, whether you are a literature repository, a data repository, a journal, or an aggregator. The next step is that you have the option to enable the metrics, the usage statistics, for the selected data source. There are four different steps that need to be performed: three on the participant's side and one on our side. On your side, you have to download the tracking code for your repository platform, either for DSpace, for EPrints, or, if you have another platform, the generic tracker. You have to configure it according to the instructions provided in the documentation, or you can contact us for support. And finally, you have to deploy the tracking code in your repository platform. On our side, the OpenAIRE side, as soon as the repository enables the service we start monitoring the repository in Matomo; I will show you in the demo later how this is realized. We continuously monitor the usage events coming to our platform, we validate the installation of the tracking code, and we inform the repository manager accordingly if everything is okay according to the recommendations. For the software: the next step is to configure the metrics. You can download the patches for various versions of DSpace; we currently support DSpace 4, 5, and 6, and soon we will support DSpace 7. We offer an EPrints plugin for version 3, and again, as mentioned, we have a Python script for all the other cases. So the configuration of the service can be done very easily, and you have our full support if required. This is how all usage events are collected in real time; this is where Matomo is actually employed. This is a view only for a Usage Counts service administrator, and we use it initially to validate the content provider and then to monitor the provider's usage activity. As mentioned in the description of the push workflow, we transfer this usage activity, process it using the COUNTER Code of Practice, and then build and store the usage statistics in the OpenAIRE statistics database. From there, they are deployed to portals or retrieved via API endpoints, while we continually monitor the service. We do not use Matomo for statistics, only for monitoring. So let's see some service numbers. We have collected information from more than 200 content providers. The majority of them are institutional repositories. We also cover journals, of which we have 20, data repositories, publication repository aggregators, and the scholarly communication infrastructure, which is the OpenAIRE portal itself; we are also collecting information from the OpenAIRE portal.
Up to May 2022, we have collected usage statistics for 6 million research products. We have 247 million views and 732 million downloads. Of course, the number of downloads is greater because of the indirect requests that are captured by our tracking software. By the end of this week, we will update the statistics up to June 2022. You can also see the evolution over the last year of the number of views and downloads we are collecting; you can see that we are collecting more than 10 million views and 20 million downloads per month. As I mentioned, this is up to May 2022. Finally, we offer COUNTER Code of Practice Release 4 reports: we support five reports that you can use to retrieve COUNTER-compliant data. I will show you a demo later of these kinds of reports and what you can actually get by using them. So far, this is the current status of the service. I'd like to move on to the future developments and the upcoming updates, which I think will be of great importance for content providers that participate or want to participate in the service. The main upcoming update is the implementation of the new release of the COUNTER Code of Practice, Release 5. All the concepts that we have discussed so far follow COUNTER Code of Practice Release 4: views, downloads, the Release 4 reports, etc. The new release changes almost completely the way that usage events are collected, counted, and published. So I will try to explain the new metrics and reports of this new release of COUNTER. I will provide some examples, as well as a comparison between Release 4, which is currently offered, and the upcoming Release 5. I will also discuss the new concepts it introduces, like data types, access methods, etc. First of all, metric types. We have to forget the traditional views and downloads and move to the new terms of Release 5: investigations and requests. In Release 5, every access, every usage event, is considered an investigation. An investigation is tracked when a user performs any action in relation to a content item or a title, whether it's a metadata view, an abstract view, an HTML full-text view, a PDF full-text download, or an article preview. On the other hand, a request is specifically related to viewing or downloading the full content item; it is more or less close to the concept of downloads in the Release 4 version of the COUNTER Code of Practice. The new metrics also use the prefixes unique and total to further categorize the metric types. So you have the unique item investigations, which count the unique article investigations within a user session; the total item investigations, which count the total number of times information related to an article is viewed, including all article full-content views; the unique item requests, which count unique article full-content views in a given session regardless of format (so, for example, if a user views an article's PDF and HTML in the same session, this would only count as one); and the total item requests, which count all article full-content views across all formats, like HTML and PDF. This is more or less equivalent to the Release 4 download counts described in the previous slide.
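A minimal sketch of how these Release 5 total and unique metrics could be derived from a stream of session events, matching the definitions above. The event format and session handling are simplified assumptions, and COUNTER's double-click filtering is omitted; the sample data reproduces the Susan scenario described next.

```python
# Derive R5 total/unique investigations and requests from
# (session_id, item_id, action) events. Simplified: no double-click
# filtering, sessions are given rather than reconstructed.
from collections import Counter

REQUEST_ACTIONS = {"html_fulltext", "pdf_download"}  # full-content accesses

def count_metrics(events):
    m = Counter()
    seen_inv, seen_req = set(), set()   # (session, item) pairs already counted
    for session, item, action in events:
        m["Total_Item_Investigations"] += 1          # every action counts
        if (session, item) not in seen_inv:
            m["Unique_Item_Investigations"] += 1
            seen_inv.add((session, item))
        if action in REQUEST_ACTIONS:                # full content only
            m["Total_Item_Requests"] += 1
            if (session, item) not in seen_req:
                m["Unique_Item_Requests"] += 1
                seen_req.add((session, item))
    return m

# Susan's session from the example that follows: three abstracts,
# then the PDFs of two of those articles.
susan = [("s1", "a1", "abstract_view"), ("s1", "a2", "abstract_view"),
         ("s1", "a3", "abstract_view"),
         ("s1", "a1", "pdf_download"), ("s1", "a2", "pdf_download")]
print(count_metrics(susan))
# -> Total_Item_Investigations: 5, Unique_Item_Investigations: 3,
#    Total_Item_Requests: 2, Unique_Item_Requests: 2
```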
I will try to illustrate this with a scenario. We have a user named Susan, who is researching, let's say, in the University of Minho repository. From a list of search results, she opens three article abstracts. So the counts, according to the Code of Practice Release 5, are three total item investigations and three unique item investigations. After reading these abstracts, Susan downloads the PDFs of two of these articles. The counts change to five total item investigations (we have three views and two downloads), while unique item investigations stays at three; but now we also have two total item requests and two unique item requests. I'm leaving this example in the slide for further reference, so you can see how counting under the Code of Practice Release 5 operates. If we compare Release 4 and Release 5 metrics, we can notice that downloads and total item requests are almost the same, as you can see in the graph, while views and total item investigations are completely different metrics; total item investigations cannot be compared to views. So why do we have these changes in this release? The COUNTER Code of Practice Release 5 metric types have the following features: total item requests are important for providers that have full-text content and report a number of full-text downloads or views; total item investigations provide a big-picture perspective of the total number of investigations; and the unique investigation and request metrics are considered powerful for identifying activity on individual items and titles, and most accurate for cost-per-use analysis. Having discussed the new metrics, let's move on to the other concepts introduced in Release 5. Data type is pretty self-explanatory: it defines the general type of content being accessed, or for which usage is being reported or calculated. For example, a data type could be an article, a book, a book segment, a collection, a database, a dataset, etc. Release 5 also introduces the access type concept, which describes the nature of access control that was in place when the content item was accessed; it appears, for example, in the journal title reports. This allows usage of open access content to be separated from content that requires a license, so we have Controlled, OA_Delayed, OA_Gold, etc. The access method is an attribute indicating whether the usage related to an investigation or request was generated by a human, who browses the repository or the library or whatever form the content provider takes, or by text and data mining (TDM) processes. This attribute appears as an optional parameter, and TDM usage is excluded from the standard journal and book reports. These are some details for the production of the reports. Finally, Release 5 also introduces the concept of the year of publication, which is the year of publication of the content item accessed. If content is available in print and online formats and the publication dates of the two formats differ, the year of publication of the print version is used. This appears in the journal title reports as well as the book reports. In terms of reports, Release 5 has a reduced number of reports: we have four master reports and 16 reports in total, versus the 24 reports of COUNTER Release 4.
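To show where these concepts surface in practice, here is a minimal sketch of a single report item in the COUNTER Release 5 JSON report style; the values are illustrative, the fragment is simplified, and a full report carries more header fields than shown.

```python
# A minimal, illustrative fragment of a COUNTER R5-style JSON report
# item, as a Python dict, showing where the data type, access type,
# access method, and the new metric types appear. Counts are made up.
report_item = {
    "Platform": "Example Repository",   # hypothetical platform name
    "Data_Type": "Article",             # the data type concept
    "Access_Type": "OA_Gold",           # the access type concept
    "Access_Method": "Regular",         # "Regular" (human) vs "TDM"
    "Performance": [{
        "Period": {"Begin_Date": "2022-01-01", "End_Date": "2022-01-31"},
        "Instance": [
            {"Metric_Type": "Total_Item_Investigations", "Count": 42},
            {"Metric_Type": "Unique_Item_Investigations", "Count": 30},
            {"Metric_Type": "Total_Item_Requests", "Count": 12},
            {"Metric_Type": "Unique_Item_Requests", "Count": 11},
        ],
    }],
}
```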
Many of the special-purpose reports that were seldom used have been replaced by a small number of flexible, generic reports, and all COUNTER Release 4 reports have either been renamed or eliminated in favor of the new release's report options. We have more filters to select dates, content types, metric types, year of publication, license types, and more, in various combinations. This is the list of COUNTER Release 5 reports: the platform master report, the platform usage report, etc. In the context of OpenAIRE, this is our initial implementation of the Release 5 reports. We have developed two master reports: the platform master report, which summarizes usage activity for a repository by month, metric type, and item type, and the platform master item report, which reports on items requested by month, metric type, item type, and repository. We have also developed the platform usage report, which summarizes usage activity for a repository by month, broken down by metric type. And we have added a dataset report, which is not officially included and has been introduced by the Make Data Count initiative for reporting usage statistics specifically for datasets. That report also uses the new metrics introduced in Release 5, that is, investigations and requests, but as I mentioned, it is not officially included in the COUNTER Code of Practice directives, at least for the current release. As an example of the filters that are allowed when requesting a platform master report: the difference from the Release 4 reports, if you have already used them, is that they had no filters for the metric type, which are now offered in the platform master report; I'm talking about the total item requests or total item investigations that can be selected. And this is an example of a platform master report following COUNTER Code of Practice Release 5, for a platform like the University of Minho; you can see the data type, the access method, the total item investigations and requests, etc. Finally, these are the Release 5 report attributes that can be used to generate the reports; the ones in yellow are the most important, and they have been used and will be offered in our reports. So, before proceeding with the recap, I would like to show you a demo. Can you see my screen again? Yes, you are seeing Matomo. Okay. This is the main Matomo platform that we use for monitoring the repositories in real time. So, for example, if I select the University of Minho repository and change the date to today, we can see the events in real time. You can see here that we have a metadata view for this particular item, the title of the item, and also a download. As I mentioned, this is only used for monitoring purposes and the initial validation of the repository. Every time a repository or a content provider registers for the service, we use this interface to check that everything is recorded properly, and later on we validate the installation. You can see an overview of the reports, and this is the raw usage activity, again, as I mentioned, in real time. The usagecounts.openaire.eu site is the main portal of the service.
There you can see information about the service itself: what the Usage Counts service is, why to use it, and what the benefits are for providers and consumers. There are some features of the service, and some analytics; for example, you can see the worldwide results. We have more than 200 repositories (it has grown since, of course; it's more than 200 repositories), 248 million views, and 732 million downloads. You can also search by country. For example, for Spain, you can see the number of repositories for Spain. Okay, let's wait... this is not normal... Okay. So we have the number of repositories, which is 16; we have collected 45 million views and 240 million downloads from this particular country. You can also see the monthly events as they arrive in Matomo, and the monthly views and downloads that I already presented in the slides. And in the resources section, you can get a COUNTER Release 4 report. For example, as I mentioned, we have the article report, the item report, the journal report, and the book report. I will give an example of this particular report. I will enter the date range, checking for the whole period, and I will look for an item using its DOI. I hope I find it... yes, this is what I'm looking for. If I click Get Report, you will see the report in JSON format for this particular item: the number of requests for January is 0; for February, we have four downloads and one view; for March, 3 and 2; 0 for April; and one download for May. And this is a report that can be easily retrieved. So, in PROVIDE, which I mentioned is the place where you register, you can also see the usage events, the usage statistics that we have collected for your repository. I will try to log in, using the example of my favorite repository, the University of Minho. You can see the usage counts either in tabular form, in boxes, or in a visualization like this graph of total views and downloads. From here, you can get the statistics reports, and we have the five reports that are compliant with COUNTER Code of Practice Release 4. So if you select the article report, for example for the first five months of 2022, you will get the results; I hope there are no issues with our backend services. Okay, these are the results for the first five months for the Minho repository. You can see that this particular article has zero downloads and views for January, February, and March, and only one download and one view during April. For this other article, we have more information; for example, we have one download and one view. This is in a tabular format, not the JSON format that was displayed in the other interface. So we have two options: in PROVIDE we offer a tabular format of these reports, while in the Usage Counts portal we offer the JSON format, if you want to retrieve the reports directly. Usage activity is also displayed in OpenAIRE EXPLORE. I don't know if you are familiar with it, but this is the main interface where you can search all the content that is available in the OpenAIRE Research Graph. Again, I will search for the University of Minho as a data source, and you can see on the left side the total number of usage statistics that we have produced and published.
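Retrieving such an item-level JSON report programmatically could look like the sketch below. The endpoint URL, parameter names, and JSON field names here are assumptions made for illustration, in the general SUSHI-Lite style; consult the service documentation for the actual API.

```python
# A minimal sketch of fetching an item-level COUNTER report in JSON,
# as in the demo above. BASE, the query parameters, and the JSON field
# names are hypothetical (SUSHI-Lite style), not the service's
# documented API.
import requests

BASE = "https://services.example.org/sushilite/ItemReport"  # hypothetical

params = {
    "ItemIdentifier": "doi:10.1234/example",  # hypothetical DOI
    "BeginDate": "2022-01",
    "EndDate": "2022-05",
    "Granularity": "Monthly",
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
report = resp.json()

# Walk the monthly performance entries (field names assumed):
for item in report.get("ReportItems", []):
    for perf in item.get("ItemPerformance", []):
        month = perf["Period"]["Begin"]
        for inst in perf["Instance"]:
            print(month, inst["MetricType"], inst["Count"])
```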
And if you click, you will get the total number of views and downloads for each month. Again, this is similar to what you saw in the PROVIDE dashboard. However, in OpenAIRE EXPLORE there is an interesting outcome: if you search for a publication, for example this one, you can see that this publication has 20 usage events counted, between views and downloads. The important thing is that, if I click on this button, you will see that we have 19 downloads and one view, aggregated from different content providers. This is how we exploit the metadata index mentioned in one of the first slides: all the information for this particular item has been aggregated and is presented in this form. So if you are an author or a publisher of this particular item, you can see in EXPLORE the total statistics for it across the whole OpenAIRE infrastructure. Finally, the last interface is MONITOR, a new portal offered for monitoring research in institutions and funders. MONITOR offers a dashboard for various types of organizations, like funders, institutions, etc. So if I select an institution with public information, in particular the University of Göttingen, I will move to its institutional dashboard, and here the usage statistics take the form of impact indicators. You can see that for this particular institution we have 80,000 publications, the average downloads per publication, and the total downloads. You can also see a number of, let's say, interesting indicators that are based on the usage statistics we have collected: for example, downloads per publication over time, or downloads per publication over time based on the year of publication; downloads per publication over time by project participation; downloads per publication by open access route over time, whether green, bronze, etc.; or downloads per publication by open access route and year of download. As you can see, there are a number of indicators that we have built, and usage statistics are one of the main parameters of these graphs, which are used to measure, let's say, the impact of this institution. We can also see the top journals by downloads per publication from repositories. You can explore this interface and examine what is offered in this form of impact view. So let me go back to my presentation. So, to recap: the OpenAIRE Usage Counts service is an easily configured, robust service; these are the key messages I would like to convey from this presentation. It follows the COUNTER Code of Practice standards, Release 4 now and Release 5 in the future. For providers, the Usage Counts service is a form of umbrella system, similar to the EOSC one, that provides information on the impact of your repository by integrating usage data and offering comparable analytics of the repository's content across the OpenAIRE infrastructure. For research products, it operates on the OpenAIRE Research Graph and allows aggregated usage statistics of research products from repositories all over the world, so you also gain the advantage of having aggregated usage statistics for your research products. So that's all from me.
Thank you very much for your attention and for your participation. I hope you have liked this presentation and found it useful. I would like to pass the floor to André to guide you through the feedback for this presentation using the Mentimeter tool. We also have some questions in the chat; maybe we can start by replying to some questions, and then we can move on. Let me stop sharing my screen and I will try to reply. The first question is about what the views and downloads represent, and why the numbers of views and downloads are different. Carlo has already answered about the difference between the numbers of views and downloads. Sorry, I couldn't understand the question. Yes, Carlo replied about why we have different numbers of views and downloads. Okay. I mentioned in my presentation of the push workflow that the number of views is lower than the number of downloads because we are counting what we call indirect hits. I also described an example. Assume that you have a repository, and you have an article whose full text is on its landing page. If I visit the repository, click on this article, and click on the PDF download, I will generate one view and one download. Correct? Okay. But if I search for this article in Google Scholar, receive a search result, click on it, and go directly to the repository to download the PDF document, this is counted as a download only, not as a metadata view. That's why we have more downloads than views. Okay. We have a question from Amelie asking if it is possible to show numbers for an aggregator, for example NARCIS. Actually, the question was: can you show it? Because we are not aggregating from NARCIS, as far as I know. NARCIS is a Dutch portal. Yes, we are not collecting usage events from NARCIS. If you want to join the service, of course, you can participate. Have we discussed this earlier? Yeah, I had heard that it could be done, but I thought, okay, I have to ask if it can be done. Yes. From our side it's no problem; I mean, we can receive usage events from aggregators, so you simply have to participate in the service. Okay. We have another question from Magdalena: do you know how Matomo counts the views and downloads? Do they filter bots? That's the first question. Okay. Matomo does count views and downloads, yes; it can differentiate between views and downloads. But as I mentioned, we do not use Matomo's counting for our service. We simply collect the events, and then we process them to count views and downloads using the COUNTER Code of Practice and a list of bots that is updated continuously. So Matomo is more or less used as a monitoring tool, and just to record the usage events; all of our processing is performed outside Matomo. The other question is: do they use a specific blacklist? "I would not want our local statistics to show drastically different numbers than the entries in OpenAIRE." Do they use a specific blacklist? You mean Matomo? I hope so. Because, as I mentioned, apart from Matomo, we are using the COUNTER Code of Practice list of robots in order to exclude non-legitimate traffic. We have another question from Magdalena.
If we provide our statistics now and later decide that we no longer want to participate, is there an option to withdraw the data? Of course, yes; you can ask us and withdraw the data. I have a big question from T. Romis: how do metrics like these differentiate between popularity and impact or importance? This relates to, for example, the papers regarding COVID-19 that receive a lot of popularity. So, how do metrics like this differentiate... Somehow we would have to take into account the topics that these articles are about; that is how it could be differentiated. I think this is part of MONITOR and the set of indicators that we are producing. So I think that if we include metadata information from the OpenAIRE Research Graph, this would be feasible, and we could differentiate these kinds of concepts. I hope I have answered your question. He is also asking if we are going to monitor keywords, to determine search intent and how the paper content satisfies the needs of the person who searches for certain information. Not for now; we are only counting events as views and downloads. We do not process any other kind of information, but yes, in the future we are planning to include keyword sets. And we have a question from Oliver: "I have noticed for our usage statistics that numbers from 2020 have been changed retrospectively. I collected them about two months ago, and when I looked into the dashboard today, all the counts from 2020 and 2021 were higher than before. How can that happen?" Okay, can you send me an email so I can check and reply to you? Yes, sure, I will send you an email. Sorry, Oliver, I will check it. Okay. Thank you. Thank you for the questions. We also have Maria raising her hand. Please, Maria, you can open your microphone. Thank you. Thank you, Dimitris, it was a very interesting and informative talk. My question is about the situation with usage coming from the publisher side, because they have obviously been recording usage for a long time. Do I understand correctly that until they decide to provide their usage data to you, you cannot add them to your service? Or can you still take what is published on their platforms and import it anyway? Hmm, this is an interesting question. Well, if the data are publicly available and follow some kind of directives, for example COUNTER Code of Practice reports in particular, then we can potentially collect them. But you mean without signing a kind of MoU or agreement? Yeah. I'm not sure we would be able to do that; it's a business decision and we would have to discuss it with... Right, because obviously this would be... Sorry, everything that is available in any form of COUNTER reports can be collected by our service. Yeah, it would be incredibly powerful if this service were able to really unite all usage figures from everywhere. Do you know at this moment how the publishers are viewing it, whether they're inclined to provide their usage data or not? No, I don't have such an impression. But we are trying to contact as many publishers as possible and make agreements with them so that we can collect their data, because the software is there; I mean, the tracking software is available. So the only thing that is missing is, more or less, the participation agreement, etc. That's why we are trying to contact the publishers and ask them: okay, do you have your statistics?
We can collect them, and then they can be aggregated, which is an important and powerful aspect of our service: we can aggregate them, and you can take them back easily using these COUNTER standards. So I think, yes, I agree, it's a matter of an agreement with the publishers, and this would make our service more powerful. Thank you very much. Okay, thank you for all your questions. I will now share my screen, and we will be quick, in order to finish our session. We just have some questions for you about the Usage Counts service and your knowledge of it, and we would like to have your feedback, which will be important for us in planning the future developments. So the first question is: have you activated the Usage Counts service? The majority say no: we have four answers for no and five for yes. Hmm, in my count the majority had not activated the Usage Counts service. So today, with Dimitris's presentation, we now know how to activate the service: you can access the PROVIDE dashboard if you have a data source, you can register the data source in PROVIDE, and then you can also enable the service. And then maybe we could ask our audience if they are already participating, registered in the PROVIDE portal. Yes, in some cases we have users that already have their data source registered in PROVIDE but do not have Usage Counts enabled. So please, if some of you have already enabled the Usage Counts service but have not finished the configuration on your side, and if you need help, you can contact us and we can support you in concluding this process. You also already have Dimitris's email address in the chat if you need to ask some questions about how to activate the service. So let's proceed to the next question: are you using COUNTER Release 4 reports or graphs in OpenAIRE portals, the graphs that Dimitris also demonstrated today? Some of you are not aware of these reports, some are using them and find them useful, and some are not using them or found that they are not useful. So, for those who are not aware of these reports, we hope that you are convinced of their potential and that hopefully you will start using them in the future; it is also important for us to better disseminate the service so that more people know about it. Let's move to the next question: are the new COUNTER Code of Practice Release 5 metrics useful, these new metrics that Dimitris presented today? The majority say yes; that's very interesting. Some of you need to learn more about them. Yes, of course. But it's very relevant that you find these reports will be relevant for you. Okay, thank you. Let's move to the next question: would you like us to maintain both Release 4 and Release 5 graphs and reports in the OpenAIRE PROVIDE dashboards? Is that only PROVIDE, or all interfaces? Yeah. We can maintain both releases in the portals, at least for some time, in order to give you the opportunity to move from one kind of report and metrics to the other, more up-to-date one, and then we can discontinue the old one. So this is where your feedback is relevant for us, because we will decide according to your preference. We have the participants more or less divided, but we have more people saying yes. Okay. We have more questions; let's move to another one: are more detailed reports, not master reports, useful? Seven, eight, nine people say yes, and more people say that they need to learn more about these detailed reports. Okay. So, next question.
When accessing the reports, would you like to see a preview of the first 20 or 30 entries, for example, and then an option to download the full report? May I explain what this means? We found that when a report is very large, we have performance issues. So what we have examined is, instead of displaying the whole report in tabular form, to compress it and make it available to whoever requested the report as a zip file. So instead of having the report displayed in OpenAIRE PROVIDE, for example, a link to download the report would be available. So this is the question: whether you would like to see a preview of the report and then also have the option to download the full report, or whether you would simply have the option to download the full report, without a preview. So it looks like it would be useful to have the first 20 or 30 entries. Okay, we are recording that. Thank you for your feedback. Let's move to the next question: would you like to have access to more detailed graphics in PROVIDE, such as the downloads per publication over time and by open access route, the impact graphics that Dimitris showed in the MONITOR portal? Yeah, it seems the participants would like to have them; we have a lot to change. Okay. And also the possibility to select which graphics to showcase in the dashboard; that is also relevant. Okay, thank you for your feedback. And we have one last question, I think: do you find useful the number of views and downloads displayed on the data source page in EXPLORE, as in the repository example that Dimitris showed in the demonstration? I was expecting that last answer: I mean, that this should only be displayed in PROVIDE, because somehow these numbers show the performance of the repository, and so maybe some repositories would prefer to keep this information for their eyes only. And it's also important to improve the way we present the numbers in the portal; six participants are saying that. Okay, let me check if we have another question. No. Okay, many thanks for your feedback. Let's see if we have some additional questions in the chat. Okay. I don't know, Dimitris, if you want to say some closing words. No, I just want to thank you for participating. I hope you found our service useful, and I am waiting for you to register for the service; we will support you in deploying it, and we hope to see some of your numbers in our service. Okay, many thanks, Dimitris. Thank you very much, André, for this webinar. I think it was...