to the next screen. Okay, so according to the program, we'll move on with the Monitor portfolio. First you will have a presentation about the OpenAIRE Research Graph, and then the Monitor dashboards, ScholeXplorer, OpenCitations, OpenAPC and UsageCounts. So we are waiting for Claudio now to share his screen.

Thank you. I'm Claudio Atzori, greetings from Pisa in Italy. I work for the National Research Council (CNR) and I'm going to give a brief presentation of the OpenAIRE Research Graph. You have heard about the OpenAIRE Research Graph many times already today, so I hope I can clarify a bit what it is about. The headline here is putting research into context and making the connections. So why are connections and context so important? We can start from a very generic definition of the OpenAIRE Graph as a collection of metadata records describing objects that have a role in the research life cycle, along with the relationships among them. This is a very high-level definition, of course, and I'll try to get a bit more into the details. On the right here you can see the principles that guide the realization of the OpenAIRE Research Graph, which basically materialize in the current content acquisition policies. It is meant to be an open collection of metadata records, and it has to be as complete as possible, meaning that it should include all the trusted and known sources that have a role in the research life cycle. It has to be deduplicated, because it must not contain ambiguities in the way statistics are calculated over it. It has to be transparent, including provenance information for the different bits of information it contains. It must support a participatory approach, meaning that it should not be the result of a closed network of contributions. It has to be decentralized, in the sense that the ownership of the content stays where the material is deposited: in repositories. OpenAIRE aggregates metadata records, that is bibliographic descriptions and the full texts, and these metadata records stay owned by the original repositories. And the graph has to be, of course, trusted: an important goal in the realization of the OpenAIRE Research Graph is controlling the quality of the metadata records.

Next slide. Here we can see a very high-level view of the basic data model. At its center are research products, which break down into publications, datasets, software, and what we call other research products, that is, objects that do not fall under the other categories. Research products are then linked to funding from projects, which are further categorized by funding streams, and to the organizations that participate in projects. From this conceptual view of the model, the graph is materialized according to the different use cases. A few numbers about the OpenAIRE Research Graph: currently it includes 23 funders, more than 78,000 content providers (direct and indirect content providers is what is meant here), more than 3 million projects from Europe and beyond, 123 million publications, software, 40 million research datasets and 8 million other research products. But these numbers are just the tip of the iceberg, as the amount of data actually contributing to the graph is much higher: this is just the result of the aggregation and disambiguation tasks.
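[Editor's note: a toy sketch of the conceptual data model described a moment ago, expressed as Python dataclasses. The class and field names are illustrative only, not the actual schema of the graph.]

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the conceptual model: research products
# (publications, datasets, software, other) linked to projects,
# which carry funding streams and participating organizations.

@dataclass
class Organization:
    name: str

@dataclass
class Project:
    title: str
    funder: str            # e.g. "EC" (illustrative value)
    funding_stream: str    # e.g. "H2020" (illustrative value)
    participants: List[Organization] = field(default_factory=list)

@dataclass
class ResearchProduct:
    title: str
    product_type: str      # one of: "publication", "dataset", "software", "other"
    funded_by: List[Project] = field(default_factory=list)
```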
So why use the OpenAIRE Research Graph? The three main pillars motivating the usage of the graph boil down to the need to tackle reproducibility and transparency, which requires tracking all the research outcomes and their context; so the coverage of the scientific outputs that are in the graph is important. Monitoring the quality and the impact of the open science movement has to be a transparent and reproducible process for all, including the research context. And, of course, the discovery of reproducible science outcomes: the graph has to enable new ways to discover these products. So not only articles related to a topic, but also exploiting the context surrounding a given research output: the relationships it has with other entities in the graph enable us to define new services and new discovery capabilities.

So who are the users interested in using the graph? Typically, as my colleagues will present later, content providers, publishers, funders, institutions, research infrastructures and so on are the typical users that define the use cases for the implementation of the services in the OpenAIRE portfolio. So as not to take the floor away from my colleagues, I'm going to illustrate the use cases around what is perhaps most interesting for data scientists: how to access the data in the OpenAIRE Research Graph. It can be accessed essentially in two ways: through programmatic access via the APIs (api.openaire.eu) and, as Paolo mentioned just a couple of minutes ago, as data dumps on Zenodo, where OpenAIRE periodically publishes different perspectives of the graph: funded products, products related to research initiatives and research communities, products related to the COVID-19 case, Scholix links from ScholeXplorer, a dump of linked open data, and the full OpenAIRE Research Graph that includes the different entities mentioned before. On the developer portal, develop.openaire.eu, there is documentation on how to consume the OpenAIRE APIs; the different facets of the API are described there. You can go to the portal and learn more about how to use the OpenAIRE APIs. On Zenodo, since it was mentioned a couple of minutes ago, there is a dedicated community hosting the different dumps, which are published regularly, more or less every six months. The complete version of the OpenAIRE Research Graph gets updated there, so you will always be able to download this fairly large collection of data and analyze it further.
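[Editor's note: a minimal sketch of the programmatic access just mentioned, querying the OpenAIRE search API from Python. The endpoint and parameter names follow the documentation on develop.openaire.eu, but should be verified there before use.]

```python
import requests

# Search the OpenAIRE HTTP API for publications matching some keywords.
# Endpoint and parameters as documented at develop.openaire.eu; verify
# against the current docs.
resp = requests.get(
    "https://api.openaire.eu/search/publications",
    params={"keywords": "open science", "size": 10, "format": "json"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
# The JSON mirrors the XML response schema; inspect the top-level keys
# rather than assuming a fixed structure.
print(list(data.keys()))
```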
So how is the graph positioned in the OpenAIRE ecosystem? This picture was already shown, and it highlights the full processing pipeline implemented in OpenAIRE. It starts on the left from the data sources, including Zenodo, Argos, Amnesia and all the thousands of repositories that contribute to OpenAIRE. You can also see Microsoft Academic, ORCID and Crossref here, contributing to what we call the first materialization of the graph, named "raw", which is then deduplicated in order to identify different manifestations of the same research output, for example a preprint and a postprint: we don't want to count them twice in the statistical analyses. Then the graph gets enriched thanks to text and data mining algorithms available in the system, which rely on the information available in the graph itself; many algorithms leverage the abstract and the full text, whenever available, to infer new links, for example references to projects, and extra properties like subject classification terms. Finally, the graph gets materialized in different backends that serve the different services and use cases we implement on top of it. So we can say that the graph plays the role of the backbone of many OpenAIRE systems, providing data to different perspectives; each use case is basically a different way to look at the same data.

So what you should remember from this presentation, in a few points: the OpenAIRE Research Graph is an open metadata collection of interlinked, cross-discipline scientific products with open access information, linked to funding, research communities and much more. It is the ground on top of which the different services in the OpenAIRE portfolio are built, and you can find more at the dedicated website, graph.openaire.eu. And that's all. Thank you for your attention. If you have questions, just write them.

Thank you, Claudio. We also have a dedicated Q&A at the end, so let's move on to the next presenter. Yeah, we can move on to ScholeXplorer. Sandro?

Good morning, everyone. I'm Sandro La Bruzzo from CNR in Italy, and I am responsible for the ScholeXplorer service. So what is ScholeXplorer? ScholeXplorer is a service that provides access to a graph of links between datasets and literature objects, and also between datasets and other datasets. These links are harvested from different scholarly communication sources, then they are resolved (because sometimes the links are only a reference to a persistent identifier), harmonized, and deduplicated. Furthermore, the links are exported using a standard format, Scholix (scholix.org), and the service is also accessible via a REST API. Why should you use ScholeXplorer? Because linking research data with literature is of great value, but there are a lot of problems in exporting these links: sometimes they rest on bilateral agreements between publishers and data centers; the links may refer to different persistent identifiers (sometimes a link refers to a DOI, sometimes to an accession number); and the links are exported in different ways. So ScholeXplorer tries to aggregate links from different data sources and export them in a standard way. It allows scholarly communication data sources to share links with any consumer, and it also gives the possibility to find all the links related to a product, because you can resolve a persistent identifier on the fly and find which entities are related to it. This is possible because, as I said, ScholeXplorer performs resolution of the persistent identifiers, and by exploiting deduplication we can also infer new links: for example, if we find that two datasets are the same, we can infer links between their different persistent identifiers. Some numbers: we harvest from something like 22 data sources; we have 50 million datasets, 40 million publication records, and 900 million relationships, which are exported in the standard Scholix metadata format. You can access this graph of links through the API; we have something like 10 million requests per day. We also publish a data dump on Zenodo every six months, more or less. As for how ScholeXplorer is positioned in the OpenAIRE ecosystem: ScholeXplorer is part of the OpenAIRE research graphs, so it provides links to the OpenAIRE Research Graph. At the moment there are two different systems, but in the future they will converge into a single system, as ScholeXplorer should become a view of the links inside the OpenAIRE Research Graph.
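[Editor's note: a hedged sketch of the on-the-fly resolution Sandro described, querying the ScholeXplorer REST API for the Scholix links attached to a given persistent identifier. The base URL, parameter name and example DOI are assumptions to be checked against the service documentation.]

```python
import requests

# Ask ScholeXplorer for the Scholix links whose source has a given PID.
# Base URL and parameter names are assumptions (v2 Scholix-style API).
BASE = "https://api.scholexplorer.openaire.eu/v2"
resp = requests.get(
    f"{BASE}/Links",
    params={"sourcePid": "10.5061/dryad.s4q2f"},  # hypothetical dataset DOI
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()
# Scholix responses wrap link records in a result array; field names
# follow the Scholix schema (Source, Target, RelationshipType) -- the
# exact casing should be verified against a live response.
for link in payload.get("result", [])[:5]:
    rel = link.get("RelationshipType")
    target = link.get("Target", {}).get("Identifier")
    print(rel, "->", target)
```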
What you should remember from this presentation: ScholeXplorer provides an open metadata collection of links between datasets and literature objects, and also between datasets and other datasets. And it does not only contain the most common persistent identifiers: we try to resolve and collect all possible persistent identifiers, like accession numbers and other identifier types. You can find more information on our website. Thank you for your attention.

Thank you very much, Sandro. If you have any questions, please use the chat. Yes, Sandro is here to answer, and we can continue with OpenCitations. Silvio?

Hello, can you hear me? Yes. Great, so let me try to share my screen. So, good morning, everyone. I'm Silvio Peroni. I am a computer science researcher at the University of Bologna, where I work as associate professor, and I'm co-director of OpenCitations together with David Shotton. So, what is OpenCitations? Very briefly, we can say that OpenCitations is an infrastructure organization dedicated to the publication of open bibliographic and citation data. We use Semantic Web technologies for storing and providing these data to the user, in particular RDF, OWL, and SPARQL as the query language for running queries over the data. All the services that we offer are free and open. We have made available different ways to access the citation data that we have: REST APIs, SPARQL endpoints, and visual interfaces. In addition, all the data that we publish are fully available online and can be downloaded in bulk. Every two months, more or less, we provide new releases of the data that we host in OpenCitations. And in 2020, just to mention something that happened last year, we got more than 23 million requests to our REST API, which is currently the most used service for accessing our data. Why use our services? Well, we created OpenCitations in order to liberate a set of facts that should be open to the community, namely the citations coming from research articles. The idea is to make research evaluation exercises more transparent and reproducible by opening these data, and also to save a lot of euros for institutions, which usually pay a lot of money to access commercial services for exactly the same data. One of the points that we are very proud of is that all the citation data that we provide can be reused for any purpose: we use the CC0 (Creative Commons Zero) waiver to make these data available to the community. So anyone can reuse the data for any purpose, for instance for developing new services to monitor research, or to improve the discoverability of the research products of a specific institution, a specific researcher, or a research group, and these kinds of activities. Thus, our goal is to give users exactly this value, in support of the specific goals of monitoring and discoverability. Currently we have available in our system more than 759 million citations, a number that is now growing thanks to recent actions from existing publishers, such as Elsevier and the American Chemical Society, just to mention some of them, which have recently released their reference lists in the open through Crossref, something that we are now processing. And so the number of citations that we have will grow a lot in the next months.
And our goal is basically to help any person that may be involved, for some reason, in research monitoring or in providing information about articles in particular: researchers, authors, students, administrators, and librarians. So there is a huge number of users within the scholarly community that can benefit from the services and the data that we make available. There are different ways to use our citation data. As I anticipated, the most used one is the REST APIs: we have a bunch of REST APIs that allow a user to access the different collections that we provide. There are also SPARQL endpoints that allow you to run more complex queries over these data. And of course the dumps, which is something that has been used a lot, at least in the past months; a lot of dumps have been downloaded, and we provide them in three different formats: CSV, RDF, and also Scholix. Just a very quick visual example of what you can do using the REST API. For instance, you can ask for all the citations that a specific article, identified by a DOI, has received. You can do exactly the same thing for the references, so the citations that a specific DOI makes to other articles. We have operations for getting the citation count of a specific DOI, and these kinds of things. With SPARQL, as I anticipated, you can run more complex queries. This is just a very simple example of one of the queries that you can run with SPARQL: for instance, you can get all the articles that are co-cited together with another article that you specify as input, having, for instance, the DOI that I'm showing on the slide. These and other queries are of course possible; SPARQL is very, very flexible from this perspective as a query language. And by using the services that we make available, third parties have started to develop applications on top of our data, using the REST API or the SPARQL endpoint, or just downloading the full dump of the data, in order to provide additional services. These are only a few examples. Some tools have been developed for showing the citation counts of papers visualized within a browser; Open Access Helper is one of these services. VOSviewer is a tool developed for scientometrics research that allows you to show a map of science according to a citation network retrieved from some source, and the OpenCitations APIs are used in exactly this way in that tool, for retrieving citation data and creating these wonderful visualizations. And there are also, let's say, repositories of bibliographic information that use citation data from OpenCitations, like Inciteful, for building the citation graph and exposing information about a given DOI or keyword. About our current position in EOSC: we are new to the gang. We are one of the additions that came on board thanks to OpenAIRE-Nexus. So the closest relation that we will have is, of course, with the OpenAIRE Research Graph. Since we are producing and exposing a graph of article-to-article citations, our most suitable setting within the OpenAIRE-Nexus project, and also within EOSC, is the close interaction with the OpenAIRE Research Graph. And by means of this interaction, our data can of course be used by the whole plethora of other services that have been presented in the previous presentations.
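[Editor's note: a minimal sketch of the REST calls Silvio described, against the OpenCitations COCI index API; the DOI below is an arbitrary example used only for illustration.]

```python
import requests

# Query the OpenCitations COCI index REST API (v1) for citation data.
API = "https://opencitations.net/index/coci/api/v1"
doi = "10.1038/nature12373"  # arbitrary example DOI

# Citation count: how many incoming citations this DOI has received.
count = requests.get(f"{API}/citation-count/{doi}", timeout=30).json()
print("incoming citations:", count[0]["count"])

# Incoming citations: each record holds the citing and cited DOIs,
# plus the citation creation date and timespan.
for record in requests.get(f"{API}/citations/{doi}", timeout=30).json()[:5]:
    print(record["citing"], "->", record["cited"])

# References work symmetrically via the /references/{doi} operation.
```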
Let me give the takeaway message very, very quickly. As OpenCitations, we publish global scholarly open citation data that can be reused for any purpose: all the data that we publish are licensed with the CC0 waiver. We have several services, in particular REST APIs, SPARQL endpoints and dumps, that can be used to gather citation data from our system. And currently we are hosting more than 759 million citations in our system, but we are processing new ones, so more will be released pretty soon in OpenCitations. And I think that's basically all. Thank you.

Thank you very much, Silvio. There are some questions in the chat, I think, so it would be nice if you could answer there, and also afterwards in the Q&A.

So, my name is Jochen Schirrwagen, from Bielefeld University Library in Germany, and I will talk about the OpenAPC initiative and service. OpenAPC was initiated in 2014 by the German Initiative for Network Information (DINI), and since then it has been developed and operated at Bielefeld University. The aim of OpenAPC is to collect, aggregate and publish fee and cost information on open access publishing from participating institutions, that is, those institutions that deliver their cost data to us. Initially, the focus of the service was on article processing charges, so-called APCs; that's why the service is named OpenAPC. But today it covers even more kinds of cost data. OpenAPC provides web-based visualizations for end users, but also offers an API, so-called OLAP cubes for online analytical processing. The service itself is free to use, it is released on GitHub, and the datasets are made available under an open database license. The dataset on OpenAPC covers around 118,000 articles with an aggregate sum of publishing fees of over 232 million euros, and the data is contributed by around 282 organizations, the majority from Europe. So why use OpenAPC, and who are the stakeholders? The aim is to make cost developments in the field of open access publishing more transparent and comparable. In this way, OpenAPC also complies with current recommendations for cost transparency. For instance, there is such a principle in the Plan S initiative from the funders' Coalition S, and the service is very helpful for them, as it offers funders the potential to standardize and also to cap payments, to avoid prices growing steadily. Another example is the report of the expert group to the European Commission on the Future of Scholarly Publishing and Scholarly Communication. Today, OpenAPC releases datasets not only on article processing charges: since last summer it also collects cost information related to open access monographs, so-called book processing charges, and it also tries to collect information on so-called open access transformative agreements, which is a quite tricky and complex task. The service is aimed at libraries and funding agencies, but also at researchers and developers, for instance in the domain of information science or bibliometrics, in order to keep track of and provide access to the open access record of expenditures for publishing fees and other kinds of cost data. OpenAPC can be used through a web user interface, which uses tree maps to visualize the distribution of costs among institutions, publishers, journals, or books. Another way is to use the REST API of the OLAP server, and a third possibility is to access the datasets on GitHub directly as CSV files. Here's an example of a tree map where a user can browse and inspect the APC dataset.
As you can see, there are a couple of options of use. At the moment we see the institutions, but you can also switch to a publisher view or a view of the journals. And there are a couple of filter options: by time period, by status (whether it reflects only cost information from gold open access journals or also hybrid journals), and by country. It's also possible to embed the tree map in your own web pages. The second example is how to query the OLAP server. The OLAP server is based on Cubes, which is a Python framework for reporting and analytical applications. In the case of OpenAPC, each cube represents cost data from a contributing or participating institution, but a cube can also represent all the aggregated data, for instance for the journals or the publishers. The API offers several operations, for example listing entries by institution or by journal, and it also offers a couple of aggregation functions, as shown here. In this example there is a query to aggregate the cost information over the time period from 2014 to 2016, and the result is a JSON file which provides the cost information for APCs: the number of articles, the average of the costs and the standard deviation. So how is OpenAPC positioned in EOSC and the OpenAIRE ecosystem? It is positioned in the OpenAIRE ecosystem, and in this way it will become part of the EOSC ecosystem. Cost-data-contributing institutions like libraries or consortia provide their data files to OpenAPC, with the cost information and the related publication metadata. OpenAPC then aggregates this information, enriches it with further publication attributes from several citation databases, and makes the datasets available to interested parties, among them the OpenAIRE Research Graph. In this way it will be one of the components in the OpenAIRE Monitor services portfolio. The takeaway message is that OpenAPC contributes to transparent and reproducible monitoring of fee-based open access publishing across institutions and countries. The datasets are regularly released on GitHub, along two dimensions: on the one side, fees paid for open access articles and monographs; on the other side, cost data from transformative agreements with publishers. Today the APC dataset covers, as I already said, over 118,000 articles with fees of over 232 million euros, and the data is currently contributed by over 280 institutions. Thank you very much, and I hope there are a couple of questions or comments.
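[Editor's note: a hedged sketch of the aggregation query Jochen showed. The OLAP server is built on the Cubes framework, whose /cube/&lt;name&gt;/aggregate endpoint accepts a "cut" parameter for slicing, e.g. by period; the base URL, cube name and dimension name below are assumptions to verify against the OpenAPC documentation.]

```python
import requests

# Aggregate OpenAPC cost data over the period 2014-2016 via the OLAP
# (Cubes) server. Base URL, cube name and the "period" dimension are
# assumptions; the /cube/<name>/aggregate?cut=... pattern is the
# standard Cubes server API.
BASE = "https://olap.openapc.net/cube/openapc"
resp = requests.get(
    f"{BASE}/aggregate",
    params={"cut": "period:2014-2016"},
    timeout=30,
)
resp.raise_for_status()
# Cubes returns a JSON document whose "summary" field holds the
# aggregates (e.g. number of articles, mean APC, standard deviation).
print(resp.json().get("summary"))
```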
Great, thank you very much, Jochen. I see some questions, also for the other speakers. Yes, they've already been answered. Yes, don't worry. So, I hope that now Dimitris, with UsageCounts, has a good, stable internet connection.

So, I'm Dimitris Pierrakos. I work for the Athena Research Center, and the UsageCounts service is the usage statistics service of the OpenAIRE Research Graph. What the service does is collect usage data and usage statistics reports for OpenAIRE Research Graph products from the OpenAIRE distributed network of repositories, using open standards and protocols. In other words, it simply counts how many times an item from the OpenAIRE Research Graph is viewed or downloaded. The sources of this information are institutional repositories, data repositories, national aggregators, etc. The outcome of this process is the generation of reliable, consolidated and comparable usage metrics which are compatible with the standard provided by COUNTER, the COUNTER Code of Practice. The uptake of the service can be shown by indicators like the number of research products whose usage we count: 3.5 million research products from almost 200 content providers, for which we have collected 100 million views and 380 million downloads. A quick introduction to how we do it: we follow two approaches, one that we use to collect anonymized raw usage activity, and one to collect standardized usage statistics reports. We combine the information from both approaches in our usage statistics database, and we publish the results in interfaces like EXPLORE or PROVIDE, or we make them available for retrieval via the SUSHI-Lite API protocol. So why use this service, and who can benefit from it? We consider the UsageCounts service a measure of scholarly impact: we consider that the importance of a research item is directly related to its usage. Therefore, usage statistics provide a kind of indicator that complements other traditional and alternative bibliometric indicators, and they provide a comprehensive and, most importantly, recent view of the impact of an academic resource. So who are the stakeholders of the service? They could be authors, institutions, open science platforms, funders, etc. And they can find information like: which funder has the biggest engagement in Europe, or show me the evolution of the popularity of the publications or data of a project within the last five years. An important feature is that, combined with the metadata deduplication functionality offered by the OpenAIRE Research Graph, the service enables the accumulation of usage for the same research outputs; in other words, it provides indicators for the usage of an item across all the content providers from which the item has been harvested. And finally, the service operates according to standardization schemes like the COUNTER Code of Practice Release 4, which is offered now, with Release 5 of the standard coming soon. So how do you use UsageCounts? You have to register via PROVIDE, and then you can view your statistics in PROVIDE, EXPLORE, or the dedicated UsageCounts service portal. And you can retrieve COUNTER reports via the SUSHI-Lite API interface, which is also offered on the dedicated UsageCounts portal. As for the details of registration, we offer instructions on how to install the service for various platforms, like DSpace or EPrints, and we also offer a generic Python script that can be deployed on any other platform to collect usage activity. In PROVIDE you can view some results from the UsageCounts service and the usage statistics. Similar results you can also find in EXPLORE or on the UsageCounts portal. But in EXPLORE you can also see an important feature of the service, mentioned before, which is the accumulation of usage for the same item across repositories. For example, you can see that for this particular publication you get aggregated usage statistics, but we also split this information across the two different repositories from which this publication has been collected.
Finally, the service offers a standard API, based on the SUSHI-Lite protocol, where you can retrieve usage information either for a publication or item, or for a particular repository. How is the service positioned in the OpenAIRE ecosystem, in the OpenAIRE infrastructure? As mentioned before, the service is part of the Monitor portfolio, and it counts the usage of the research products, which are the main entities of the OpenAIRE Research Graph. These entities are created by the deduplication, harvesting and mining services of the OpenAIRE Research Graph and collected from content providers like institutional repositories, aggregators, data archives, etc., and the UsageCounts service counts the usage of these research products. So, as a takeaway message: UsageCounts provides standards-based usage statistics exchange for almost all types of content providers and platforms. It complies with the COUNTER Code of Practice and allows the exchange of reliable and comparable usage statistics reports. It follows the GDPR guidelines in order to respect users' privacy. It offers global coverage, supporting institutional repositories and content providers from all around the globe. And it supports analysis via APIs and visualizations. If you need further information, you can find it on the service portal, usagecounts.openaire.eu. So this is more or less the presentation of the service. Thank you for your attention. If you have any questions, please go ahead, and I'll try to reply.
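[Editor's note: a heavily hedged sketch of a SUSHI-Lite report request of the kind Dimitris described. The endpoint URL, report code and parameter names here are hypothetical placeholders; consult usagecounts.openaire.eu for the actual interface.]

```python
import requests

# Request a COUNTER-style usage report over SUSHI-Lite.
# Endpoint, report code and parameter names are hypothetical
# placeholders; check the UsageCounts portal for the real interface.
ENDPOINT = "https://services.openaire.eu/usagestats/sushilite/GetReport"
params = {
    "Report": "IR1",                           # item-level report (assumed code)
    "BeginDate": "2021-01",
    "EndDate": "2021-06",
    "ItemIdentifier": "oai:example.org:1234",  # hypothetical item identifier
}
resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
print(resp.json())  # COUNTER-style report with views/downloads per month
```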
Now let's move on with Ioana and the Monitor dashboards.

So, I'm going to talk about the OpenAIRE Monitor, which some of you may know already. The purpose of the Monitor is to provide tailor-made, customizable monitoring of research and innovation activities and their impact. The idea is that it's an on-demand service where you can create, with our help, your own monitoring dashboard that is highly customizable in terms of the organization and viewing of the different indicators. And the purpose is, as I said, to track research activities, for example projects and publications, and their impact across different dimensions. You can also view the public dashboards of other stakeholders. As a monitoring tool, the purpose, besides monitoring itself, is of course to support policymaking, by means of evaluating past performance in order to decide for the future, and also storytelling. And we have added several functionalities to make sure that the visualizations can also be used for reporting purposes, and the data for internal analysis of the different indicators. So let me get a little more into the functionalities that we added in order to satisfy all these purposes. The indicators are based on the OpenAIRE Research Graph, so on the deduplicated entries harvested from several sources and text-mined; they are well documented, and they are meant to be timely and reliable. You are able to download both the data and the visualizations of the indicators, and there are filtering functionalities that apply to your downloads as well. And I'm going to emphasize again that these are customizable, in the sense that, besides the fact that you can group the indicators yourself into categories of interest, since the service is based on the OpenAIRE Research Graph, which is such a rich source, there are several ways to organize and view the data. Therefore, in discussions with us, we can create different indicators that may be of more interest to you. You can also invite your own team members to view and edit the dashboards, and you can separate private indicators for internal monitoring from public indicators to show to external stakeholders. Okay, so the idea is that, starting from the graph, you provide us with some information. For example, a funder could provide the set of projects that they are interested in and their funding scheme. Then we build the dashboard, you can validate that the indicators and so on that you view are okay with you, and we set up your portal, which you can use for your monitoring purposes. Let me say at this point, going back one slide, that right now we're in the process of setting up some funder dashboards. You can already view the EC's dashboard, which is public. But the service can also be used by institutions, research infrastructures, and other types of stakeholders: for anybody who needs monitoring of research activities, we are able to build a dashboard. All right, in the OpenAIRE ecosystem, we are the nice thing in color here at the end: after the data pass through all the different services that make them reliable and open and wonderful, we are in the post-processing part at the end. Another way to view it is this way; this information is also available on the platform. And the takeaway from here (I think I went a bit too fast, I'm sorry) is that we are able to use the OpenAIRE Research Graph, with its regular updates and so on, to build a rich set of indicators that can track research activities for funders, institutions, and research infrastructures. This is an on-demand service, where you contact us in order to set up your monitoring dashboard in the way that best fits your monitoring needs. And we have also added some functionalities that allow any stakeholder to use it for reporting and analysis. So in the end it is essentially a business intelligence monitoring tool. Okay, thank you.

Okay, thank you, Ioana. I think with Monitor we complete a phase, and we see that for many stakeholders, many needs, many users, many problems, we have solutions so far. So you identified funders here, and the institutions, and one prominent example is the European Commission, which is using the OpenAIRE Monitor service. Thank you very much. And let's now move on to the next Q&A session. Yes, most of the questions are being answered, so okay. Yes, we are on time.

One thing: the Monitor is built on top of the graph, but content from the graph can of course be taken by others who want to run different kinds of experiments, or extract different kinds of information, or double-check that what we came up with effectively results from the original data. That's the idea: the content is open for others to use and to build different services on, including of course companies or organizations, etc.

And yes, now, as you can see, we have the same code on Mentimeter. So please use this code again (you see it also in the chat) and try to answer three more questions, please. Question number one: which services do you know, or are you familiar with, of the services that were presented just now? Again, we would like to see more than 30 people participating, so that we have quite good results and can get an overview. We are almost at half. It's also good for us to know where to focus in the next, let's say, events and dissemination and communication activities.
For example, if UsageCounts comes last, then we can say, okay, we can give you more info about it; maybe you need to know more; it's something we can work on. Yeah, let's wait three seconds, five... yeah, half, 24, and we can move on to the next question, maybe. So: which services do you use in your funder, institution, organization or research community? Although they are different services with different objectives, we don't know if you use any of them. Yes, there are a lot of people who don't use any. So I suppose researchers cannot really answer here, because this one is aimed at research communities, funders, organizations. Feel free to open the microphone if you want to add some information, or just a comment. Okay, we can move on to the next question. The majority doesn't use them. Yeah, and at the same time you can write any question in the chat. Everything is recorded, so don't worry, you'll get your answer. So: in which way do you think these services could be useful in your daily activity? Okay, can we see the services? So it means: think again of your daily life, your work, your activities, what you have to do, to present, to deliver, to finish, and how you can imagine that the last services that you saw (I will repeat them: Monitor, ScholeXplorer, OpenCitations, OpenAPC and UsageCounts) could be useful in your daily life, your daily activities.

There's a comment here from Jo Havemann, who says: it looks like OpenAIRE-Nexus is evolving into a fully holistic research management system, fully compliant with all kinds of policies, European Commission and open science principles. Is the European Commission already proactively recommending the use of OpenAIRE services? How flexible, adaptable and interoperable is the Nexus system compared to alternatives of individual services? Is the common goal to develop a fully interoperable global ecosystem, or will it have a Eurocentric character? This is for the presenters, or even for the floor; I don't know if someone wants to answer.

Yes, I was typing in an answer. Okay, thank you. I typed it in, but I can expand; it's a great question. Yes, of course, we are providing services that are as mature as possible, and we are trying to make them compliant with the requests from the specific stakeholders we are dealing with; but at the same time the idea is to build services that overall align with standards and interoperability frameworks driven by the needs of the community. I made an example here, but the idea is not that OpenAIRE services will become the unique services serving the Commission, or the EOSC, in providing given functions; rather, these services will be there as an opportunity for those who need to use them, and they will align to common interoperability frameworks decided together, so that other services, maybe more specific to one community or another, can comply with the same interoperability frameworks, and third-party services can take advantage of that.
Episciences is a good example: I can build an overlay journal but still rely on several kinds of repositories out there. How can I do that? How can I build a system like Episciences without suffering too much? By relying on common interoperability frameworks for data exchange and APIs with which the individual data sources out there, in this case the HAL archive, can comply. And this, I think, will be the major driver behind the EOSC: a collaborative opportunity enabling this kind of interaction, by sitting down at the same tables, and of course offering enabling services in the middle, like registries that allow me to find out which other services are compliant with a framework, and to build services that can take advantage of that.

Yes, thank you so much, Paolo. We have an answer again: "we are working along the same scheme, on a much smaller scale, with AfricArXiv; thanks for the response." Thank you so much. So maybe we should move on, because we are a little bit late. Now we're going to have a five-minute break, and then we come back here to continue with our Discover portfolio. So see you in five minutes.