So good morning and welcome to this last day of the OpenAIRE Open Access Week. We have had a long week, with a lot of sessions during the mornings, where we presented some of the OpenAIRE services, and two Knowledge Cafés during the afternoons. Today we have our last session. Before we start, some housekeeping rules. As you all know, this event will be recorded, so we ask all participants to keep their microphones off. If you want to participate, use the chat to address questions, doubts or comments to the speaker, or at the end of the session open your microphone or raise your hand to speak directly to the speaker. The presentation and the recording will be uploaded to the event page and sent to you by email. We have a strong presence on social media, so please tag us or use the hashtags of OpenAIRE, the OpenAIRE Week or the OpenAIRE services.

Today's session is dedicated to a recent service that was integrated into OpenAIRE through the OpenAIRE Nexus project: OpenCitations, which is an infrastructure for open bibliographic metadata. Our speaker today is Ivan Heibi, whom I thank in advance for his presence and for accepting our invitation to present the service, which is now integrated in OpenAIRE and the OpenAIRE Research Graph, where we can now see all of its data. So the floor is yours. Thank you so much.

Thank you, Paula. Okay, I will share my screen. I think you can all see the presentation now, right? Good. So, hi everyone. My name is Ivan Heibi. I am a researcher at the University of Bologna and a computer scientist; I am a developer working at OpenCitations, where I am responsible for the technical infrastructure. Today we will go through some important points. We will start with a general overview of the OpenCitations infrastructure: what we mean by citations, how we treat them, the data that we host, the infrastructure, and some statistics. Then I will discuss how OpenCitations is integrated with other services, particularly with the OpenAIRE Nexus project and the EOSC, and some future steps: what we are planning, what we are already developing, and what we plan to release in future projects. Finally, I will give a short presentation on how to use the data of OpenCitations, together with a demo showing how to do these things in practice.

So, a quick overview. OpenCitations is an independent infrastructure organization, currently hosted by the University of Bologna, which is the legal entity hosting OpenCitations. It is dedicated to open scholarship and to the publication of open bibliographic metadata and citation data, and it uses so-called Semantic Web technologies to model the data it hosts. The organization is engaged in advocacy for open citations and open bibliographic metadata, mainly through two initiatives: I4OC, the Initiative for Open Citations, and I4OA, the Initiative for Open Abstracts. OpenCitations provides, first of all, a data model: all the data handled by the infrastructure are modelled according to the OpenCitations Data Model, the so-called OCDM.
The OCDM is based on the SPAR Ontologies; going back to Semantic Web technologies, these are the ontologies used to model the citation data hosted and saved in OpenCitations, and the OCDM has recently been included in the FAIRsharing service. OpenCitations also provides, of course, bibliographic and citation data, all under the CC0 waiver, so there is basically no limitation or restriction on the usage of the data. The main datasets are called the OpenCitations Indexes: these are the datasets that contain the citation data of OpenCitations. We also provide the software that we develop to release our services, along with the software used in the production of the data of OpenCitations; all of it is released under open licenses and deposited in GitHub repositories, so it can be reused and fosters the reproducibility of the same procedures. Then there are the online services: we have REST APIs and SPARQL endpoints, which are, let's say, the portals that can be used to freely query the data saved in OpenCitations; we release dumps of the datasets that we produce; and we offer user-friendly web interfaces to search through the data of OpenCitations.

As you can imagine, citations are the key information we focus on. What is a citation? Basically, it is a link between a citing article and a cited article. In scholarly publishing this link can hold a lot of information, because it encapsulates how the citing article is using the cited article and why this link happened. To represent this link we need some basic metadata about the citing article and the cited article. And, as OpenCitations, we want this data to be open. What do we mean by open? We want the data to be structured, so it is saved in a format that can be reused also by machines; separable, so we can access the data from different services; and of course without any restriction on the usage of such data.

Since we use Semantic Web technologies, all the data that we host are Linked Open Data. We define the citation data in our infrastructure using the OCDM, the data model of OpenCitations. What we want to do here is to represent the citation itself as an entity. In the previous slide, the citation was only a link between two entities. Now we want the citation to be the main entity, and we want this entity to have some attributes and characteristics. So we have the citation, which is the main box, let's say; the citing article; the cited article; the time of creation of the citation; the time span, that is, the difference between the publication date of the citing article and the publication date of the cited article; and whether this is an author self-citation or a journal self-citation. The main advantage of this is that we can hold all the information regarding a citation in one place, in one box. This makes such data easier to describe, to distinguish, to count for bibliometric purposes, and to process.
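To make this concrete: a single citation record, as it comes back from our services, looks roughly like the following. This is a minimal illustrative sketch in Python; the field names follow the COCI REST API, while the identifier and DOI values are invented for the example.

```python
# Illustrative only: one citation modelled as an entity, with the attributes
# discussed above. Field names follow the COCI REST API; the identifier and
# DOI values below are invented for the example.
citation = {
    "oci": "oci:0200101-0200102",    # Open Citation Identifier (made-up value)
    "citing": "10.1234/citing.doi",  # DOI of the citing article (made up)
    "cited": "10.5678/cited.doi",    # DOI of the cited article (made up)
    "creation": "2020-03-15",        # creation date of the citation
    "timespan": "P2Y1M",             # citing date minus cited date (ISO 8601 duration)
    "journal_sc": "no",              # journal self-citation?
    "author_sc": "no",               # author self-citation?
}
```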
Since we have this kind of representation, we also need an identifier for the citation itself, and we have defined the so-called Open Citation Identifier (OCI). This is a globally unique persistent identifier which identifies an open bibliographic citation stored in the datasets of the infrastructure.

As I said, the main datasets are the Indexes. The last release of the Index datasets, in August of this year, contains 1,363 million (about 1.36 billion) citation links between 75 million bibliographic resources. These are all data hosted in OpenCitations and modelled according to the OCDM data model. To use this data we provide different services, from the REST APIs to, as I said, the SPARQL endpoints, SPARQL being the query language used in Semantic Web technologies to query the data. We also provide dumps, in CSV format and in RDF format, and user-friendly interfaces to query the stored data.

The main citation indexes are called COCI and CROCI. COCI is the OpenCitations Index of Crossref open DOI-to-DOI citations. What do we mean by that? These are all the citations that have been retrieved from Crossref and that are open and available in Crossref. The data come from Crossref, and the index is based on the Crossref dumps: when Crossref releases a dump of its data, we update our dataset with those citations using the OpenCitations Data Model, so we re-represent the data with the model of OpenCitations. The second one, which is smaller (COCI is the bigger dataset), is called CROCI, the Crowdsourced Open Citations Index. In this case we are not querying a big service such as Crossref; rather, we get the citations from individuals, from authors. Each author, identified by an ORCID, can deposit the citation information about their articles, again under the CC0 public domain waiver, so we can embed this data in the infrastructure and expose it.

About citation coverage: this chart is from an article published one year ago, based on the last COCI dump of 2021. It compares the coverage of COCI with other, more popular citation indexes. We can see that with COCI 2021 we moved from about 28% to 50% coverage of all the combined resources, so we are very close to the percentages of Web of Science, for instance, or Dimensions. The diagram still shows Microsoft Academic, which has since, let's say, moved to OpenAlex, and we think the percentages are about the same for OpenAlex.

Now some statistics regarding the data and the dumps that we are generating. This chart covers June 2021 to August 2022. As you can see, we moved from almost 800 million citations to 1,300 million citations. There are two important events here, marked by the red circles. August 2021 was when the publisher Elsevier opened its citations on Crossref, which was a huge ingestion into the indexes of OpenCitations.
And in August 2022, Crossref removed the option to keep deposited references limited, so all the references deposited in Crossref should now be open. That is why we see this larger jump compared to the differences between the other dates.

So how is OpenCitations integrated with other services? It is part of the portfolio, or catalogue, of OpenAIRE, and it is part of the MONITOR process in the OpenAIRE Nexus project. The main contribution concerns the Research Graph: the idea is to ingest the data of OpenCitations into the OpenAIRE Graph, and also to have a link back from the data of the Graph into OpenCitations. The first part is done, so the data of OpenCitations are ingested into the OpenAIRE Research Graph, and we still have to move forward on the second part. Secondly, we have developed software for the ingestion to and from OpenCitations, and this could be useful for other Nexus services, such as Episciences, for instance, which could integrate and query the data of OpenCitations. OpenCitations is also exposed in the EOSC Marketplace, and it is used by many other services: for instance VOSviewer, which displays citation networks based on the citations of OpenCitations, and it is used also, for instance, by BIP! and other services.

Regarding the EOSC, some of the main principles are openness and the FAIRness of the infrastructure. OpenCitations, as I said, provides all the data under the CC0 waiver, so it is very devoted to the principle that data should be open: all the citations can be reused without any limitation or restriction. And FAIRness, because all the software and all the methodologies used to develop and produce the data of OpenCitations are available, so all the methodologies and processes can be reproduced. In fact, we recently presented at the TPDL conference a theoretical model to move not only the software but the infrastructure itself to a state where it can be reproduced: we would like to define the infrastructure as a distributed service which could be reproduced in case we want to move the infrastructure to a different host. This is something we have only designed theoretically so far, but we would like to move to this kind of strategy in the future.

The KPIs of OpenCitations are based, basically, on the data that are indexed (the first graph represents the indexed data that I showed before) and on the usage of the data. Here we represent the usage with two different classifications. One is the APIs, which are the main way the data of OpenCitations are used: we reach almost 34 million API calls per month querying the data in the datasets. The other classification, which we have called here "datasets", covers all the other services and mechanisms that we provide to access and query the data, from the search interfaces to dump downloads and all the other mechanisms.

So, what are the future steps?
Before I list the future steps: all the things that we are planning to do, or are doing right now, are reported on Trello. We have a public Trello board where you can view all the processes we are going through and what we are planning to release; here I have included a link to the blog post that describes this.

Okay, so one of the key things we are planning to release is called OpenCitations Meta. OpenCitations Meta will enable us to store in house, in the infrastructure itself, the bibliographic metadata about the citing and cited publications that are involved in all the citations we have indexed in the datasets of OpenCitations. What happens here is the following. Going back to how I represented a citation: we have a citing article and a cited article, and in all the processing we have done so far we have used the DOI as the identifier of both. That is not always possible. With Crossref it was possible, because we queried it and got data that have DOIs, yet other publications could lack this information. The idea is to have an internal mechanism which can identify a publication not only based on the DOI but also based on other identifiers, for instance PubMed IDs. In this situation we will also have one entity which can describe, or point to, all the duplicated entities in all the other resources (a small sketch of this idea follows at the end of this part). This is the idea behind the production of OpenCitations Meta. It will also, of course, improve the API performance of OpenCitations, since we will no longer need to fetch data from external services: we will get everything from the OpenCitations Meta dataset. And it will enable text search, because we will have metadata about the citing and cited articles, so we can search, for instance, by title or by other textual fields. We expect to release Meta by the end of this year.

Then, about the Indexes. For the main index, which is COCI, we release a dump every two months, and the next dump will actually be released, I think, in a few days; it is already ready to be released, let's say. Then we will move on to two more indexes, which we call NOCI and DOCI. The NOCI index is based on the citations of the NIH Open Citation Collection, while the DOCI index is based on the citations of DataCite. With the NIH Open Citation Collection we have almost 730 million citations, and these are based on PubMed IDs, so, as I said before, OpenCitations Meta will be crucial to ingest these citations. Then we have the citations of DataCite, which are almost 170 million citations, and these have basically the same representation that we already handle for the citations of Crossref. Again, these are only numbers about the citations: they will not simply be added to the current totals, because some of these citations could already have been ingested into OpenCitations through Crossref.
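Just to illustrate the de-duplication idea mentioned above: this is a hypothetical sketch, not the actual Meta implementation, where one internal record per publication is reachable from any of its external identifiers. All names and values here are invented.

```python
# Hypothetical sketch of the de-duplication idea behind OpenCitations Meta:
# one internal record per publication, reachable from any of its external
# identifiers (DOI, PubMed ID, ...). Names and values are invented.

publications = {}    # internal id -> metadata record
id_to_internal = {}  # external id (e.g. "doi:...", "pmid:...") -> internal id

def add_publication(internal_id, metadata, external_ids):
    """Register a publication once, under all its known external identifiers."""
    publications[internal_id] = metadata
    for ext_id in external_ids:
        id_to_internal[ext_id] = internal_id

def resolve(ext_id):
    """Return the single internal record, whichever identifier is used."""
    return publications.get(id_to_internal.get(ext_id))

add_publication(
    "meta:br/0601",  # invented internal identifier
    {"title": "An example article"},
    ["doi:10.1234/example", "pmid:12345678"],
)
# Both identifiers lead to the very same record:
assert resolve("doi:10.1234/example") is resolve("pmid:12345678")
```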
So, using OpenCitations Meta we can de-duplicate the entities, and we will add only the additional citations. For these two indexes, too, we expect a release by the end of this year. Finally, we would also like to have an HTML component that can be added to any web page: a widget representing the citations of an entity that is provided as a parameter, so that one can add it to their own web page to display this information. This is also expected by the end of this year.

So, querying the data: how to query the data. I will go a bit fast on this part, because I would like to show you a live demo of how to do it. We have the SPARQL endpoints, as I said, but this is maybe the hardest option to use, because it is the most technical: to query the data using the SPARQL language, users need some technical background in Semantic Web technologies. For those who know SPARQL, though, this option is very flexible, since the query itself can be adjusted and redefined at will (a sketch of such a query is shown at the end of this part); it just needs that technical background.

Then we have the web-based search interfaces. This is basically the easiest way to query and directly visualize the data. Through the OpenCitations web page we can search and view the entities of the datasets, and we can filter, sort and export the results. These search interfaces are based on a software that we have developed, called OSCAR. This is an example of the software we have released: it is on GitHub and can be freely reused for other use cases.

And finally, the REST APIs. As the statistics I showed before indicate, these are the most used option. They represent the convenient access to the data included in OpenCitations, and are used by web developers, researchers and, of course, users who are not really experts in Semantic Web technologies. As with the search interfaces, here again we have developed a dedicated software, called RAMOSE, and the APIs have been built with it. Again, anyone can take RAMOSE and reuse it on their own datasets, provided they are based on Semantic Web technologies and an RDF representation.

One more thing, which is not a query service as such, but is important: the APIs can also be used by specifying a so-called access token. We really encourage people who want to use and query the data of OpenCitations to get this token before using the APIs. The APIs can be used without specifying the token, and the token does not track any information about the user who requests it: it is basically an opaque string that we use to anonymously identify each unique user of the OpenCitations APIs. It is not mandatory, but it helps OpenCitations a lot in getting statistics about how many different users use the APIs. This was a recent addition to the services of OpenCitations: as of September this year, we had registered 12 different tokens that had made API requests, with an average of 1,414 calls per token; as of today, 36 different tokens are registered.
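Here is the sketch of the SPARQL option mentioned above: a minimal example sent from Python. The endpoint URL and the CiTO property names are assumptions based on how the Indexes are modelled with the SPAR Ontologies, so treat them as such.

```python
# A minimal sketch, assuming the public OpenCitations Index SPARQL endpoint
# and the CiTO terms used by the OCDM / SPAR Ontologies (both assumptions).
import requests

ENDPOINT = "https://opencitations.net/index/sparql"  # assumed endpoint URL

QUERY = """
PREFIX cito: <http://purl.org/spar/cito/>
SELECT ?citation ?citing ?cited ?creation
WHERE {
  ?citation a cito:Citation ;
            cito:hasCitingEntity ?citing ;
            cito:hasCitedEntity ?cited ;
            cito:hasCitationCreationDate ?creation .
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
response.raise_for_status()
# Standard SPARQL JSON results: one binding per row of the SELECT.
for row in response.json()["results"]["bindings"]:
    print(row["citing"]["value"], "->", row["cited"]["value"])
```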
So, thank you a lot; I have basically just finished the presentation. If I have time, I can give you a brief demo of this last part.

Okay, so this is the OpenCitations website. If we go to "query the data" here, we basically see listed the things I mentioned in the presentation: the SPARQL endpoints, the REST APIs, and the search interfaces. If we go to the SPARQL endpoint, here it is. Basically, it is a query language, so one needs to know how to use it; once you define the query, you can run it, and here we have the results based on the query. It is very flexible: for those who know the SPARQL query language, it can be modified according to their own use case. That is the first option.

The second option is the search interfaces, and I have already opened this one. Basically, even from the search box here, we can search for a DOI. So if we search for a DOI like this, okay: this is the user-friendly way to get the data about the citations. I have inserted a DOI, and here I get the list of all the citations to that DOI. If we look at the "cited" part, we will find the DOI I have just specified, and in the "citing" part we will have all the articles citing it. We can export these results: if I click here, I will export them in CSV format. We can sort the results as we want, and we can also filter them: for instance, I can say I want only the citations that were made in 2018, and it will show only those. If I go back here, I will get all the other results as before.

The last mechanism to query the data, which is the most used one, is the REST APIs. Here we have a page which describes how to use the APIs of OpenCitations. If we go to the operations, we have basically six main operations. The first one is called "references": one can specify a DOI after /references/ and get back, as a result, all the citations where that DOI is the citing entity, so these are all the references of that particular article. Then we have "citations": in this case we give a DOI and we get all the citations of that DOI. As you can see, and as I also said in the presentation, each citation is represented and characterized by these attributes: the OCI, which is the identifier of the citation; the citing entity and the cited entity, both given as DOIs; the creation date of the citation; the time span between the publication of the citing entity and the cited entity; whether it is a journal self-citation; and whether it is an author self-citation. We can also access a citation directly by specifying its OCI, and with this we go directly to query and get that specific citation.
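To give a preview of calling the same "citations" operation from code rather than the browser, here is a minimal sketch, assuming the COCI API route as I recall it from the API page; the DOI is just an example, and the 2018 filter is done client-side simply to mirror the filtering step shown in the search interface.

```python
# Minimal sketch of the "citations" operation from code. The base route is
# an assumption taken from the OpenCitations API documentation; the DOI is
# only an example and can be replaced with any other.
import requests

API = "https://opencitations.net/index/coci/api/v1"  # assumed base route
doi = "10.1108/jd-12-2013-0166"                      # example DOI

citations = requests.get(f"{API}/citations/{doi}", timeout=60).json()
print(f"{len(citations)} citations found")

# Keep only citations created in 2018, mirroring the demo's filter step.
for c in citations:
    if c["creation"].startswith("2018"):
        print(c["oci"], c["citing"], "->", c["cited"], c["creation"])
```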
There is also a "metadata" operation, which gives us back metadata; here, since OpenCitations Meta is not yet released, this kind of API request needs to query external services, and we are very confident that this API call will be very much improved once OpenCitations Meta is released. Then we have a direct "citation count" and a "reference count", which give you back a number for the DOI, that is, the particular article given as parameter to the operation.

All the operations can also be, let's say, refined. We can filter the results: for instance, once we have all the results, we can take only the citations that were made after some date, or use anything else that involves the attributes of the citation itself. We can also decide how to sort the data, and in which format we want the data; basically we provide two formats, CSV or JSON. And, maybe the most complex option: if we want the data back as JSON, we can also redefine the representation of the data in a different way. For instance, we can redefine the list of the authors as a list of full names or a list of surnames only; we can decide whatever we want.

So if I try, for instance, to call the citations of this DOI right here: okay, I have copied and pasted the API call into the browser, and we get back the list of the citations. Here, this one is a citation, this is a second citation, and so on. Since I have asked OpenCitations to give me back the citations of this particular DOI, we will always find this value in the "cited" part of each citation, as you can see here. The part that changes, of course, is the citing article, and so do all the other attributes, such as journal self-citation, creation and so on.

Okay, one last thing I wanted to say: the APIs are not only meant to be used from the browser, they can also be used inside code. On the page of the APIs we have a brief code snippet showing how to do this, for instance, in Python: how to query the data using the APIs, and also how to specify the token, if you want to do that. The token can basically be specified during the API call only if the call is done via code like this.

Okay, I think I will leave the stage now; if there is any question, I can also go further, but I think I have said everything. Thank you.

Thank you so much, Ivan. I think it was very detailed. We have one question here, from Irina, who says: thank you, very useful. If a university would like to check the citations of the works of its researchers, which of the querying options would you recommend for this task?

Thank you. So, do they want to get the citations based on the author names? Or, if the team is here, you can chime in.

Yeah, maybe I can clarify; it is a use case for a university. What do you think would be the best approach for them: to collect a list of DOIs of the researchers and then somehow query a batch of DOIs, or ORCIDs of the researchers? I don't know if you have any tips about that.

Okay. So basically, if you have a list of DOIs and you want to get the citations of all these DOIs, you can use the API calls. As I showed here, you can also write a small piece of code that, for each DOI in the list, passes it as the parameter; a minimal sketch of this is shown below.
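A minimal sketch of that approach, looping over a list of DOIs. The "authorization" header carries the optional access token mentioned earlier (the value here is a placeholder), and the DOIs are examples to be replaced with your own list.

```python
# Hedged sketch: collect the citations of each DOI in a list via the API.
# Base route and header name are assumptions taken from the API page;
# the token value is a placeholder and the DOIs are examples.
import time
import requests

API = "https://opencitations.net/index/coci/api/v1"
HEADERS = {"authorization": "YOUR-OPENCITATIONS-ACCESS-TOKEN"}  # placeholder

dois = [
    "10.1108/jd-12-2013-0166",      # example DOIs; replace with your own list
    "10.1007/978-3-030-00668-6_8",
]

all_citations = {}
for doi in dois:
    resp = requests.get(f"{API}/citations/{doi}", headers=HEADERS, timeout=60)
    resp.raise_for_status()
    all_citations[doi] = resp.json()
    time.sleep(1)  # be polite to the service when looping over a long list

for doi, cits in all_citations.items():
    print(doi, "is cited", len(cits), "times")
```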
That way you will get the citations of each of these articles, identified by their DOIs. Then, if this list is very big and you need to do this every time, the thing we suggest is to download the actual dump of OpenCitations and use that, because on the OpenCitations blog we have provided, let's say, a tutorial on how to use the dump directly, without the APIs. If you want to do this very often, the APIs could slow you down; but if you have the dump already saved on your side, it can be much faster.

And regarding ORCID: if you actually have an ORCID and you want to query based on the ORCID, we very much look forward to going in this direction with the integration of OpenCitations Meta. Right now it is a bit more complex to do that: basically, citations can be retrieved based on the articles themselves, not on the authors. But we are very confident about moving to that particular step with the OpenCitations Meta integration, which is crucial right now exactly because of such queries, and free-text queries, of course. I don't know if I answered the question.

Thanks a lot. Thank you. I would also like to add something, because my colleagues here, whose work is more related to publications, are using a plugin from a project in which OpenCitations is involved; I shared the link in the chat. They are integrating the plugin inside OJS, Open Journal Systems, to have the citations in OJS. I don't know if you want to talk a little bit about this, because maybe there are people who are interested in the Initiative for Open Citations.

Yes, yes, okay, because I work on that. Okay, that's why. My colleagues shared this link with me, and I saw that OpenCitations is also involved in this initiative, and my colleagues are already using the plugin and inserting the citations into Open Journal Systems.

Yeah, that is basically I4OC: the initiative to open the citations inside Crossref. The fact that we have COCI, the main index of citations, is because we have open citations inside Crossref, and the initiative has, let's say, improved this reality; that is why we can now have the citations inside OpenCitations. So yes, we are very much part of the initiative, and I myself worked on this just two days ago.

Okay. We have another comment here: will you in the future use IDs for institutions, like ROR, for retrieving citations from a particular institution? I think that is what was meant.

Ah, okay. That could be a very interesting option to handle and to consider in the future. We can surely discuss this too, because we basically need to move towards getting citations back based on any kind of input. Right now it is based only on DOIs, but we always want to go a step forward: from DOIs to other identifiers, to textual search, and so on.

I think I don't hear you; you are muted. Oh, I'm sorry, sorry. Yes, we don't see any more questions here.
I just shared a link to an evaluation form; I would like the participants to give their opinion about this session, so if you kindly fill in the form, I would appreciate it. And if anyone has more comments or suggestions to address to Ivan, now is the time, just as he is sharing links.

Is this the Crossref linking one, or something else? I was wondering about this OJS use case, because that is also interesting. No, I don't know this one. I can check it, but I don't know this one. I can share it anyway, but I think you can see it on the website. And there is an article shared by Anastasia; yes, this was an article about open citations. Okay, thank you.

So, could you say once more who wants to know more about this plugin and integration? Do you mean the plugin that we are using for integrating the Crossref data inside OpenCitations? I was just wondering whether that is a plugin or something else. No, it is actually a software, and it is all here, under the GitHub repository of OpenCitations. If you go here, it is called "index", this one; I will share it with you. So this repository, which is under the OpenCitations organization, is the software that we are currently running to ingest the data from Crossref, if that is what you meant.

And also for the OJS use case: is there anything else than the plugins that I posted? What is the workflow for it? Okay, we have a post on the OpenCitations blog. Maybe Chiara, do you have it? I think it is there too. But it's okay, if you have a blog I'll just look. Yeah, I will share it with you, because it is a blog post that we have written on exactly how to process the data based on the dump. Otherwise, if you want to process the data based on the APIs, you can use the approach that I briefly described here. Chiara has already posted the link on the data dump.

So my second question was a separate question: is it somehow a tutorial? Yes, it is a tutorial about how to use the data dump, how to process the data dump, if you have to do that massively. We always encourage people to download the dump, and I can show you here. Wait, not "query the data", I want the datasets. Okay, here, this one. All the dumps are deposited on Figshare; this is the page, I can share it here with you. These are all the dumps that are released every two months, let's say, for the indexes of OpenCitations. As you can see, you can download the data in CSV, in N-Triples, which is used for the Semantic Web representation, and in Scholix, and we also have the provenance data. But basically the one that you need is this one, the citation data in CSV, and the tutorial, which Chiara has shared, I think it is this link here, will tell you how to query the CSV dump of the data. The last CSV dump is 34 gigabytes, and you can keep it like that: it stays 34 gigabytes, zipped and compressed, and using the tutorial you can query the data inside without decompressing it; a minimal sketch of the idea is shown below. Thanks so much.
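To give an idea of what the tutorial does, here is a minimal sketch of processing the zipped CSV dump without decompressing it to disk. The archive path and internal layout are assumptions to be adjusted to the actual dump you download; the column names follow the citation attributes shown earlier.

```python
# Illustrative sketch: read the COCI CSV dump directly from the (still
# compressed) ZIP archive, without unpacking ~34 GB to disk. The file name
# and archive layout are assumptions; column names follow the dump's CSV
# (oci, citing, cited, creation, timespan, journal_sc, author_sc).
import csv
import io
import zipfile
from collections import Counter

DUMP = "coci_csv_dump.zip"  # path to the downloaded dump (placeholder name)
counts = Counter()

with zipfile.ZipFile(DUMP) as archive:
    for name in archive.namelist():
        if not name.endswith(".csv"):
            continue
        with archive.open(name) as raw:
            reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
            for row in reader:
                counts[row["cited"]] += 1  # citation count per cited DOI

# The five most cited DOIs found in the dump:
for doi, n in counts.most_common(5):
    print(doi, n)
```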
So I can show you the tutorial here. Yes, it's this one, basically: a tutorial on how to process the COCI CSV dump without decompressing it. It goes through how to download the dump itself and how to process it step by step: after downloading, how to work with the data and how to access it. You need a bit of background here; we have written the tutorial in Python, so there is a bit of coding. But if you don't want to code anything and you just want to get the data, you can freely use the APIs, which are always available. Thanks a lot. Thank you. Thank you so much.

We don't have more questions. We will keep the chat and we will share it with you by email as well, because there are a lot of links here that many people are interested in. We still have some time, so if someone wants to raise a question or a comment to Ivan, feel free to do it. If not, thank you so much, Ivan, thank you on behalf of OpenAIRE. Thank you so much for your availability, for being with us here, and for giving us such detailed information about the service that is now integrated in OpenAIRE via the Research Graph, and about all the possibilities that open up along with the integration with the different services in OpenAIRE. So thank you so much, and thank you all for being here.