Okay, so, perfect. Welcome to this community call from OpenAIRE PROVIDE, targeting as always the content providers and managers: those managing the repositories that we have in the OpenAIRE infrastructure, those managing the different kinds of data sources from which we aggregate content and provide services. It's always a pleasure. We didn't have a community call last month, but we are holding this one on the first Wednesday of the month — the last community call of 2023, which is a great pleasure. The idea is to present some of the latest developments and novelties, specifically functionalities of the dashboard, but also the other components related to your participation in the OpenAIRE infrastructure and to the PROVIDE service itself. So welcome all. I see that more people are already joining, so we'll move quickly through this introduction, as we want to cover the novelties of the OpenAIRE Graph — some relevant updates that I think are important for you to be aware of. Our colleague Thanasis Vergoulis will detail those updates; I'm not sure if Claudio will also join you, Thanasis. As Thanasis has limited time, we'll dedicate the first part of the meeting to this, and then Dimitris Pierrakos will update us on the usage statistics — there are some relevant updates about the reports that we would also like to share with you. So we will first discuss the Graph with Thanasis, then the statistics with Dimitris, and of course we are available to discuss any other issue that you have. Just two minutes to highlight some recent news. We usually share with you what is in production in terms of our Graph: the index and stats update is from the 23rd of November. It's important to highlight that we want to do this update, as you know, every month, but we did the previous one around the 20th of September.
We didn't have an update in October, so it's important to highlight that this is in fact a two-month update, not a one-month update — relevant for you to be aware of. In the coming days you will receive the subsequent actions: those of you who have notifications enabled for broker events, for example, will receive those notifications. It is also important to highlight some things related to the full text. As always, it's worth sharing that in the terms of use you can update the terms related to the metadata harvesting and to the collection of full text. We are now trying to improve the information available about what we collect, and you can already see the numbers in EXPLORE. Let me share an example that I opened. Can you see my screen? Yes, you still see my screen? Okay, great. If you open the landing page of your data source, you will see that we now show the collected full texts, which I think is relevant for you, alongside the number of records that we collect, which you can see in the aggregation history. The collected full text is quite relevant for the results of some of the inference systems that we have — quite relevant, for example, for the links to projects: as you know, for some of your data sources we derive those links mainly from the full text and not from the metadata.
If you have questions, we can reply to them after the presentations. I also want to highlight, as I have it here on the slide: our intention is to make this information available here as well. We don't have it in the dashboard yet — you can see it publicly in EXPLORE — but our intention is to improve the aggregation history that you see for your repository. We have already improved it: you now see what we collect, what we transform, and the version that is indexed, but we also want to put this information here, so that you have all the information gathered in the same place. The idea is that the Aggregation History tab of the dashboard is where you see this information, and also where you see when the content from your repository was indexed and made available in production in our EXPLORE service. Please also check the information and help area on the right, because Andre, together with other colleagues, did good work putting more information there, explaining what we mean by aggregation stage, collection mode, etc. You can also find some useful links, so if you have doubts about what you see and how to interpret this information, it is quite well explained. That is one highlight we wanted to share with you. The other is a quick reminder about the campaign we did on onboarding data sources and research products in the EOSC Marketplace. There is a webinar and there are relevant instructions for you — Andre can share these links with you here in the chat. Just a reminder: for those of you that are part of OpenAIRE, it is quite easy to also be onboarded in the EOSC Marketplace, to be part of the research product catalogue, let's say. So if you didn't have the opportunity to do that yet, you have the instructions and the webinar to explain it better. Also to let you know, we will have novelties in the coming year about the guidelines working group. With the OpenAIRE Guidelines we are in a phase where we want to make them a community-driven, global initiative. We had a big kickoff at Open Repositories last June in South Africa, and we will start working in a working group — not only people from OpenAIRE, but a global working group with representatives from different initiatives from different parts of the world — to govern the interoperability guidelines from OpenAIRE. Okay, with these explanations and novelties — I will come back to the report that we published, but we intend to give that information at the end — I give the floor to Thanasis to present the novelties about the Graph. For the details that I presented just now, please put your doubts in the chat or open your microphone at the end, because we have time to discuss and clarify these issues. So, Thanasis, the floor is yours, to share your screen or the OpenAIRE Graph website and present some of the developments of our Graph. Okay, thanks Pedro for the introduction. I selected to present a couple of new developments regarding the Graph — important ones that are expected to improve the coverage of some information that the Graph already collects. Before starting, let me also give you a quick reminder of the whole workflow to produce the Graph. As you all know, we get the information from the onboarded data sources, based on the OpenAIRE interoperability guidelines, that you provide to OpenAIRE through the OpenAIRE PROVIDE service, and we also include information from some instrumental data sources like Crossref,
DataCite, Microsoft Academic, ORCID, etc. We aggregate everything; we also exploit information from PDFs and XMLs containing the full texts of publications to extract, through mining, enrichments that can be included in the Graph; and we deduplicate all the publications and all the research objects. We further enrich, using inference, the records we produce: for example, if we have the information that a particular product is relevant to an organization, and we know that this organization is a department of another organization, we also infer that connection. Then we clean and finalize the Graph production. As you know, this Graph is available through the public graph dataset and the public Graph API. The API is updated each time we have a new version of the Graph — in most cases once a month — and the public graph dataset is updated once every six months. These data are also used to provide the required information for the OpenAIRE added-value services, like CONNECT, EXPLORE, MONITOR, the Open Science Observatory, and others, and for third-party services that consume the OpenAIRE Graph data to provide useful functionalities to various communities and related stakeholders. Of course, through these services we also get user feedback, which is included back into the Graph data. Based on this workflow, we are now able to produce a graph containing metadata and interactions for more than 170 million publications, more than 59 million research datasets, more than three hundred thousand research software items, and millions of other research products. You already know all this, so the first development I would like to discuss today is the production of the fields of science, the FoS classifications. So what is this? It is a classification of research publications based on the discipline, the topic, the field of science that they relate to.
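As an aside on the public Graph API mentioned above, here is a minimal sketch of building a query against it. The endpoint and parameter names are assumptions based on the publicly documented OpenAIRE HTTP search API; verify them against the current API documentation before relying on them.

```python
from urllib.parse import urlencode

# Minimal sketch: building a query URL for the public OpenAIRE search API.
# Endpoint and parameter names are assumptions; check the current API docs.
BASE = "https://api.openaire.eu/search/publications"

def build_query(**params):
    """Return a request URL, e.g. filtering by DOI, with JSON output."""
    params.setdefault("format", "json")
    return BASE + "?" + urlencode(sorted(params.items()))

url = build_query(doi="10.1234/example")  # hypothetical DOI, for illustration
print(url)
# The response could then be fetched with urllib.request.urlopen(url)
```

Fetching is left out deliberately: the sketch only shows how a client would parameterize a query against the monthly-refreshed API.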
OpenAIRE has a collaboration with the Athena Research Center, whose research team produces classifications for publications: they use the metadata and the content of the publications to identify, to reveal, the most relevant fields of science for each particular publication, and they give these classifications to OpenAIRE to be included in the Graph. This is very valuable information in many applications, because in some areas, like research assessment, it is important to know the field of particular publications. Until now, about 40 million publications have already been classified with at least one such field, and by the end of the year we expect to have classifications for more than 100 million publications. Something important to mention is that these classes are not just topic names: they come from a particular taxonomy, the SciNoBo taxonomy, which is maintained and produced by the Athena Research Center team I mentioned. It is a taxonomy with multiple levels. In the top level you find fields that are pretty generic — for example, in this case here, medical and health sciences — so there is a small number of generic fields of science covered at level 1. At level 2 you dive into more specific fields, like clinical medicine here, and as you go below that, the topics become even more specific. The methodology used to produce this taxonomy is based on well-established taxonomies for scientific fields, and as we go to the lower levels, the team producing these classifications takes into consideration techniques like topic modeling; then they work together with experts in the
respective fields to give meaningful names to these fields. So when we say that we provide FoS classifications for different publications in the Graph, this means we have classifications at different levels: right now you can get classifications down to level 4, if I'm not mistaken, and by the end of the year we expect to have classifications in more depth, at least for some domains where these lower levels are fine-grained and have been worked on by the community. So this is the first development I wanted to discuss. The second one concerns the affiliation links included in the Graph. One piece of information that is, again, important in some domains is the connection between research products and research organizations — we call these affiliation links. This information is very useful: for example, without these links, some of the OpenAIRE services currently provided, like OpenAIRE MONITOR, cannot work properly. Coverage is very important here, because if you are missing links, then when you want to get a glimpse of the productivity or the aggregated impact of a particular organization, you are missing a lot of content. So although this is pretty important information, it is not always present in large scholarly communication data sources: in Crossref, for example, it is provided, but not for all entries, and even when it is provided, it usually comes in the form of affiliation strings — free-text strings describing particular organizations — which are not always easy to map to a particular organization with certainty. On the other hand, as many of you may already know, there are initiatives that provide persistent
identifiers to organizations — for example, the ROR initiative provides ROR IDs — and if you have the ROR IDs of the organizations linked to a particular research product, that is a very valuable type of information. This is exactly what we are trying to do for the Graph: identify exact connections, where we know for sure which organization is the linked one. To do so, we follow multiple approaches. On the one hand, we use the full-text PDFs and XMLs that we collect from publisher websites to extract affiliation strings and then map them to particular ROR IDs; on the other hand, we also gather the affiliation strings provided by Crossref and PubMed to do a similar job. For this work to be useful, it is very important to have a mapping algorithm between affiliation strings and ROR IDs that is quite precise. Our engineers have developed such an algorithm, and we have preliminary evaluation results showing that the precision of the affiliation links is quite high, more than 90%, while the recall is more than 85%, which is also quite high. Based on this activity, between August and November, in the versions of the Graph released during this period, we saw an addition of 34.3 million affiliation links coming from PDF mining, 25.8 million links from Crossref, and 23.5 million links from PubMed — with overlaps between these sources, of course. Apart from this, we already had the affiliation links provided by Microsoft Academic; the problem is that Microsoft Academic has been discontinued, so it is important for us to follow approaches like those I described, because otherwise we would have had small coverage of affiliation links for the years after 2022, after Microsoft Academic was discontinued. In general, thanks to all these activities, we now have in the Graph affiliation links for more than 79 million research products in total.
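To make the idea of affiliation-string matching concrete, here is a toy sketch. The real OpenAIRE algorithm is far more sophisticated (the precision above 90% and recall above 85% mentioned above refer to it, not to this); the ROR IDs and the matching rule below are purely illustrative placeholders.

```python
import re

# Toy mapping of free-text affiliation strings to ROR IDs.
# The IDs below are made-up placeholders, not real ROR records.
ROR_NAMES = {
    "university of minho": "https://ror.org/example01",
    "athena research center": "https://ror.org/example02",
}

def normalize(s):
    s = re.sub(r"[^a-z ]", " ", s.lower())   # drop punctuation and digits
    return re.sub(r"\s+", " ", s).strip()

def match_affiliation(affiliation):
    """Return the ROR ID of the first known organization named in the string."""
    norm = normalize(affiliation)
    for name, ror_id in ROR_NAMES.items():
        if name in norm:
            return ror_id
    return None

print(match_affiliation("Dept. of Informatics, University of Minho, Braga, Portugal"))
```

A substring lookup like this already shows why affiliation strings are hard: abbreviations, translations, and department names all defeat naive matching, which is why a dedicated, evaluated algorithm is needed.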
Of course, this number is expected to grow month by month, because we are using new versions of Crossref and PubMed and because we are processing new PDF files. And with that, I would like to conclude. Great, Thanasis, many thanks. This is very good news about the Graph, really contributing to filling some gaps in this ecosystem, so thank you very much. We have three or four minutes for questions: please open your microphone, raise your hand, or just put them in the chat. In fact, Pascal Dengis already asked a question: are the fields of science also attributed to objects other than publications, like projects or datasets? I think for datasets yes — yes, for some of the datasets, but not for all of them. I mean, the large coverage relates to publications, because the current approach, the SciNoBo approach, focuses on publications: it also uses information like the citations and references of these publications and the venues where they are published, so it uses as much information as possible. This does not mean they are not also producing some of these classifications for datasets that have a DOI, but currently a limitation of the approach is that it refers only to items with connected DOIs. In the future this may change, but currently the majority of the products that get a field of science are publications. And in fact — I'm not sure about this, but I don't think it's our intention to do the same for projects; it's only for research outputs. No, that is the original idea, yes; but if you have the fields of science for the publications that each project has, you can infer the fields of science related to that particular project, so if this is something useful, we would consider applying some kind of inference there to do it. Okay, any other questions or clarifications needed about
these three highlights that Thanasis gave about the Graph? As always, you can explore the visible part of the Graph at explore.openaire.eu, and of course there is also the website that Andre already shared here, where you can better understand the workflows and all the novelties in the news. Pascal asks: are the SDGs also attributed now, or are they still in beta? Okay, yes, they are — I mean, in theory, when we provide the SDGs and the FoS in EXPLORE we say this is in beta, but right now we can say the process is pretty mature. We have tested it for a long period, for several months; we have SDGs and FoS for our products, and the coverage has also increased for the SDGs. I don't remember the numbers by heart right now — I chose to focus on FoS, but something similar also holds for the SDG classifications. They are produced by the same team, using of course a different classification algorithm. We still want to receive feedback directly through EXPLORE if there is anything people want to report; I remember we asked for that in the past for the fields of science and also for the SDGs — maybe the SDGs are more relevant now. Then Silvia-Adriana Tomescu asks: what standards are important — sorry, not important, imported? And, okay, Pascal too. I'm not sure if Silvia is asking about what we aggregate here, or whether it relates to one of the updates. Do you want to clarify — standards regarding which aspect? Yes: can you expose the ontological foundation of the graph? Did you use, for example, FOAF or Dublin Core for linking data? Do you mean how we include the information in the graph in general, or how we consume the fields of science or the affiliation links? I think it's about the basis of the graph, what we have in the graph. Okay: in the graph, everything is supported by the OpenAIRE interoperability guidelines. If we have repositories,
publishers, journals, registries, aggregators, or CRIS systems that follow the guidelines we have in place — maybe someone can include the link here — then, if such a data source is aligned to these standards, we can consume from them. Regarding the instrumental data sources, we follow a different approach, because we include them to apply additional enrichments and to increase the coverage of particular aspects that are important for the Graph: we put some effort into developing our own workflows for integrating them and aggregating them with the content we get from the onboarded data sources. Okay, thank you. So I think we can move on now, and Thanasis, you are free — I know you needed to leave three minutes ago already. Do you want to say something to finish? No — thank you for the invitation, and apologies that I couldn't make it for the whole meeting. If there are any follow-up questions, you can send me an email or forward them here, and I think Pedro and Andre will let me know. Thank you all for your interest. Yes, thank you very much, Thanasis. And feel free to ask more questions; if we are not able to reply, we will ask Thanasis. So, Dimitris, you can start — I think you have slides as well. Just a clarification first, because this is why we have these community calls: for people to feel free to ask questions. We have experienced providers here, so we can always ask and clarify things — the question that Silvia-Adriana raised is relevant. Just that clarification, and then Dimitris, feel free to start. Andre already put the link for the guidelines in the chat: when we mention the metadata interoperability guidelines for OpenAIRE, we didn't reinvent the wheel — of course we rely on well-established standards. So Dublin Core is there, as are alignments with
DataCite, with CERIF, and with controlled vocabularies from COAR and others. This is what we try to integrate in a comprehensive way in our guidelines, and we then ask the providers to expose their metadata compatibly with our interoperability guidelines — just to clarify that. Okay, Dimitris, now. The idea is also to highlight things from the usage statistics, for those already benefiting from them and for all those that could benefit — you can always come to the dashboard and enable this service, and Dimitris will follow up with you about it. We have some novelties that are important to share with you. Yeah, thanks Pedro. I will try to present the new developments in the UsageCounts service to accommodate the new COUNTER Release 5, which is the standard for usage statistics exchange. Let me share my screen — can you see my screen? Okay. So, first of all, a quick recap about the UsageCounts service and its main features. It is the usage statistics service for the OpenAIRE Graph, which was presented by Thanasis in the previous talk. What the service does is count the usage events related to the items in the Graph: the publications, the datasets, etc. We offer two different workflows for collecting usage events: the push workflow, which tracks user activity using specialized server-side software, and the collection of already-available COUNTER reports, which we incorporate into our usage statistics database. We anonymize the IPs to respect user privacy, and we also exploit the metadata deduplication functionality offered by the OpenAIRE Graph, which allows us to accumulate usage for the same research output. The service is compliant with the COUNTER Code of Practice and provides standard usage statistics.
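On the IP anonymization mentioned above: a common technique (not necessarily the exact method UsageCounts uses) is to truncate each address to a network prefix before storing it, so no individual user can be re-identified. A minimal sketch:

```python
import ipaddress

# Sketch of IP anonymization by truncation: keep only a network prefix
# (/24 for IPv4, /48 for IPv6). This is a generic, illustrative approach,
# not necessarily the exact UsageCounts implementation.
def anonymize_ip(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("192.0.2.55"))        # 192.0.2.0
print(anonymize_ip("2001:db8::1234"))    # 2001:db8::
```

Truncation keeps enough locality for aggregate statistics (e.g. per-network counts) while discarding the bits that identify a single machine.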
Currently we are compliant with COUNTER Release 4, and I will present how the service becomes compliant with Release 5. In general, the UsageCounts service offers indicators that, we consider, complement other traditional and alternative bibliometric indicators, providing a comprehensive and, most importantly, recent view of the impact of academic resources. Up to now we have been compliant with the COUNTER Code of Practice Release 4 metric types, which offer the well-known metrics: views and downloads — views for the metadata, downloads for the full text. For the future, we have developed new metric types, new concepts, and new reports in order to be compliant with the COUNTER Code of Practice Release 5. With the Release 5 metric types, we move from views and downloads to investigations and requests — I don't know which of you are aware of these changes. An investigation is counted when a user performs any action in relation to a content item or title: for example, an investigation happens when a user views an abstract in a repository, views an HTML full text, views or downloads a PDF, or otherwise accesses an article. A request, on the other side, is specifically related to viewing or downloading the full content item. So with investigations we cover almost any activity regarding an item, and with requests we cover the download and the view of the full item content. As for the definitions of these metrics: the Unique_Item_Investigations metric counts unique article investigations inside a user session; the Total_Item_Investigations metric counts the total number of investigations related to an article that has
been viewed, including all full-content views of the article. The Total_Item_Requests metric counts all article full-content views across all formats — for example HTML, PDF, JPEG, etc. — and is roughly equivalent to the Release 4 metric type "downloads". And the Unique_Item_Requests metric counts unique article full-content views in a given session, regardless of the format: for example, if a user views an article as PDF and as HTML in the same session, this will only count as one. Let me give an example scenario — one of my favorites — that explains how these new metrics are calculated. Susan is researching the history of Porto in the UMinho repository. She performs a search, and from the list of search results she decides to open three article abstracts. Up to now, we have the following counts: three Total_Item_Investigations and three Unique_Item_Investigations. After reading the abstracts, Susan decides to download the PDFs of two of the articles. The counts under Release 5 of the COUNTER Code of Practice then change to: five Total_Item_Investigations (three views and two downloads), three Unique_Item_Investigations, two Total_Item_Requests, and two Unique_Item_Requests. So what is the rationale behind these metric types? Total item requests are considered important for providers that have full-text content and report the number of full-text downloads or views. Total item investigations, we consider, provide a big-picture perspective of the total number of investigations. And as far as unique investigations and requests are concerned, they are considered a powerful metric for identifying activity on unique items and titles, and they also offer a more accurate basis for cost-per-use analysis and a way to measure the performance of the data source.
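Susan's session can be sketched as a small event log, and the Release 5 counts then fall out of two rules: every action is an investigation, only full-content actions are requests, and "unique" deduplicates per (session, item) pair. A rough illustration — the event model and action names here are invented for the example, not the actual UsageCounts data model:

```python
# Illustrative COUNTER R5 counting: "requests" are full-content events,
# "investigations" are any item-related events (requests included).
REQUEST_ACTIONS = {"pdf_download", "html_fulltext_view"}

def r5_counts(events):
    """events: list of (session_id, item_id, action) tuples."""
    inv = [(s, i) for s, i, a in events]                        # every action
    req = [(s, i) for s, i, a in events if a in REQUEST_ACTIONS]
    return {
        "Total_Item_Investigations": len(inv),
        "Unique_Item_Investigations": len(set(inv)),
        "Total_Item_Requests": len(req),
        "Unique_Item_Requests": len(set(req)),
    }

# Susan opens three abstracts, then downloads two of the PDFs, all in one session.
susan = [("s1", "article_a", "abstract_view"),
         ("s1", "article_b", "abstract_view"),
         ("s1", "article_c", "abstract_view"),
         ("s1", "article_a", "pdf_download"),
         ("s1", "article_b", "pdf_download")]
print(r5_counts(susan))
# {'Total_Item_Investigations': 5, 'Unique_Item_Investigations': 3,
#  'Total_Item_Requests': 2, 'Unique_Item_Requests': 2}
```

Note how deduplication happens per session: a second download of the same PDF in the same session would raise the totals but leave the unique counts unchanged.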
The data types covered by the Release 5 protocol are articles, books, book segments, collections, databases, datasets, journals, multimedia, platforms, and repository items. There are a number of reports offered by COUNTER Release 5, but for now we have implemented four of them — actually three, plus one regarding datasets, which has been defined by the Make Data Count initiative. We have the Platform Master Report, summarizing usage activity for the repository by month, metric type (for example Total_Item_Investigations, Total_Item_Requests), and item type; the Platform Usage Report, a more generic report summarizing usage activity for the repository by month, broken down by metric type; the Item Master Report, which comes from Release 4 with some minor changes and is used to report item requests by month, metric type, item type, and repository; and the Dataset Report, which is similar to the Item Master Report but only for datasets. I plan to show you a demo of these reports, which have been developed in our beta PROVIDE dashboard. I don't know if you have any questions so far — feel free to ask. I think it was quite clear, and with an example, which is always interesting; the example was great, but if you have questions just open your microphone. This is a community call, so you can interrupt and ask questions, or just put them in the chat — I don't think we have any in the chat so far. I think it's also relevant to highlight the maturity of the COUNTER Code of Practice, which is becoming more mature and more relevant, and I think this latest version is relevant, and
so it's good to have it available and ready to be put in production. Feel free to give the demo. Okay. What I'll show has been implemented in beta so far, and we will move it to production soon. If you have enabled the service, you will see that there is a tab "Usage Counts" — you are only sharing the slides, so maybe you need to... sorry, sorry to interrupt; start again. Okay, can you see my screen? Yes, it's working. Okay. So if you have enabled the service, you can see in the PROVIDE dashboard the tab "Usage Counts", where you have all the information that has been collected for your repository — the number of downloads, the number of views, and some graphs — and you can click here to get the statistics reports. In production we have, so far, the COUNTER Release 4 reports, but in beta we have also added the Release 5 reports — the four reports I mentioned in the presentation: the PR, the PR_P1, the IR, and the DSR. So let's see the Platform Master Report. This is a report summarizing, as I mentioned, the usage activity for the repository by month, metric type, and item type. You can specify the range here — for example, from January to October 2023 — and get the report; it's a big report and will take some time. I think it's important to highlight, while the results are loading, and then Dimitris can explain: what you see now in production, if you have access, is only COUNTER 4. What we want is to have on the same page COUNTER 5 and COUNTER 4; at some point we will discard COUNTER 4, but for now we will keep both. Yes, please. So, you see that this is the usage statistics report for this particular Platform Master Report: you can see the
platform, which is the University of Minho repository; the data type, article; and the access method, regular — we can also have TDM, but this is explained in the protocol. And the results: for example, for January you have 41,000 Total_Item_Requests, plus the total number of investigations, the Unique_Item_Requests, and the Unique_Item_Investigations. This is for all the articles available in this platform, the UMinho repository, for a period of 10 months; then we have the books, with a similar report for the books, etc. For this particular report you can also select to see information only for the metric types you want — for example Total_Item_Requests and Unique_Item_Requests — and again see the results here. The next report is the one summarizing usage activity broken down by metric type. Again, I will use the same period, and you can see the totals for all items for this particular platform — the name of the platform is the UMinho repository — Total_Item_Requests for January, for February, etc. The Item Report, as mentioned, provides the statistics for all items, or for selected items. For example, if I go here, I will select a period of only one month in order to get a quick report, but for all items. You can see here that this report cannot be displayed on the screen, so you have to download it — you can download it here as a zip file. Can you explain a bit that option — why some reports we can see and others we need to download? Yes: because this is a big report and it cannot be displayed properly in the browser, we decided to facilitate this. We provide it as a JSON file, which is also the format supported by the
This is a proper report that can be downloaded, and you can process it on your premises, on your side.

This explanation is important: every time that we run a query here, the report is...

We cannot see it. You cannot see the "Download report" button because you are not sharing the entire desktop.

Okay, let me share again. Can you see now?

Yes.

So, Dimitris, just to clarify this choice between downloading and viewing in the browser: it is not about the report type, it is about the volume?

Yes. Every time the volume is big, a download button is created by default, so that you can download it. This is the report: the JSON file, a proper JSON file that can be used to process the report.

Okay, perfect.

I can also provide an example: if you select a specific identifier, a specific publication...

Yes, but you are not finding data; maybe change the year. Okay. So, we put this in beta and we did the testing, and I think we are quite happy with the tests, so if any of you is available... The issue here is that we only have a small number of providers, of repositories, in beta.

Sorry, again this requires a download. If I download the report (this is an item report), you see that the zip file is 66 megabytes, so it is difficult to present it here. Okay, so this is a more or less similar report for datasets; let's see if we can get some results. Yes, this is the report for datasets: we have the dataset title, the year of publication, the access method and the results, then another title, another year of publication and some other results. This is only for a one-month period, which is why we have such a short list.
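When you process the downloaded JSON on your side, it follows the COUNTER Release 5 report layout: a `Report_Header` plus a list of `Report_Items`, where each item carries `Performance` entries holding a `Period` and per-`Metric_Type` `Instance` counts. A small sketch of summing one metric per month across a report; the sample report content below is invented for illustration:

```python
import json
from collections import defaultdict

# Tiny invented sample in the COUNTER Release 5 JSON report layout.
sample = json.loads("""
{
  "Report_Header": {"Report_ID": "PR", "Release": "5"},
  "Report_Items": [
    {"Platform": "ExampleRepo",
     "Data_Type": "Article",
     "Performance": [
       {"Period": {"Begin_Date": "2023-01-01", "End_Date": "2023-01-31"},
        "Instance": [
          {"Metric_Type": "Total_Item_Requests", "Count": 41000},
          {"Metric_Type": "Unique_Item_Requests", "Count": 30000}
        ]},
       {"Period": {"Begin_Date": "2023-02-01", "End_Date": "2023-02-28"},
        "Instance": [
          {"Metric_Type": "Total_Item_Requests", "Count": 38000}
        ]}
     ]}
  ]
}
""")

def totals_by_month(report, metric_type):
    """Sum one metric per calendar month across all report items."""
    totals = defaultdict(int)
    for item in report.get("Report_Items", []):
        for perf in item.get("Performance", []):
            month = perf["Period"]["Begin_Date"][:7]  # YYYY-MM
            for inst in perf["Instance"]:
                if inst["Metric_Type"] == metric_type:
                    totals[month] += inst["Count"]
    return dict(totals)

print(totals_by_month(sample, "Total_Item_Requests"))
# e.g. {'2023-01': 41000, '2023-02': 38000}
```

The same loop works for the large downloaded item reports, since they share this structure regardless of size.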
So, this is more or less what we have developed for the COUNTER Release 5 reports. Feel free to join the beta Provide dashboard and play with this, and if you have any issues please report them to us. Thank you very much.

So, I was just giving that explanation: we put it in beta and we are happy with the tests; we made tests with some of the data sources. The limitation here, for some of you, is that we only have a small number of providers in beta, because we cannot open the beta fully for testing, but we are quite happy, and we will put it in production as soon as possible. We want to do it before the end of the year, but I envisage some limitations, because December is usually short. For sure in January we will have everything in production, and in fact that is good, because you can already explore the 2023 results using these new reports; I think it will be quite useful for you. If you don't have access to the usage statistics, you can always enable the service and work with us to make sure it is properly set up for you. And feel free to ask questions, because we can provide you support.

Just to finish, because we only have three more minutes, I want to highlight this report, which was in fact released yesterday. We discussed some of the numbers a bit in September, in the OpenAIRE General Assembly, but then the authors of the report put some more effort into properly presenting, exploring and analyzing the results, and yesterday OpenAIRE, COAR, LIBER and SPARC Europe started disseminating this report. Feel free to access the DOI; the report is available in Zenodo, and there is also a news item that was released, which is also
putting it in the spotlight. Quite interesting results; I think it is important for us to explore them. Some of you may have answered the survey that was used for this report. Please read it: I, at least, had some surprises, and also some confirmations; some things that prove the value of repositories, and other things that show we need to work more to strengthen the role that repositories play in the scholarly communication ecosystem. So let's work together on this. After this release, feel free to explore it and to ask questions. Our idea is that in the upcoming community call we dedicate half of the call to discussing the results. We will invite at least Eloy Rodrigues from the University of Minho, our colleague here, who is one of the authors, and colleagues from OpenAIRE and from COAR are available to discuss the report with us in the upcoming community call. So feel free to access the report and discuss it with your colleagues and in your countries; I think the results are relevant.

We will continue to try to have the community call every month, on the first Wednesday of the month. We will not do it on the first Wednesday of January, in fact the first Wednesday of the year; instead we propose to do it on the 10th of January. We will schedule the upcoming calls, and you will see them on the openaire.eu Provide community calls page; we will schedule them at least until June, so feel free to put them in your calendars. We are waiting for you on the 10th of January, where we will present some novelties and recent new functionalities in the dashboard, and we will discuss the report I just highlighted here, on the current state and future directions for open repositories in Europe, this quite
relevant report that we can discuss in this first community call of 2024. Subscribe to the newsletter, if you haven't already; we always try to send the newsletter one or two days before the community call, so put it in your calendar. We will also try to inform you, in the first week of the year, about the calendar of the community calls. And with that we are coming to the end. Thank you very much. I'm not sure if there are any other questions here in the chat; if yes, we can reply in one or two minutes. The recordings are available, and the slides too. Yes, Antanas, please feel free to ask.

Hello, thank you for the very interesting novelties, but I have some other questions.

Yes, please.

I would like to ask about my ticket. I sent a reminder about the ticket yesterday to Leonidas and Andrei. I see that they are participating in this meeting, and my question is when my colleague and I will receive an answer, because we have been writing since June, June 16, about the aggregation of our...

I'm not sure if Andrei has some news on this. It is important, so you can always raise the question; we have a support system in place. It is important to say, Antanas, that between the summer and now we changed the support team for the aggregation, the aggregation team, and this is why we may have some delays; for other things we don't have delays. Can you remind me of the name of your repository? I know that it is from...

I'm looking at your ticket right now; we will reply to you.

Fine, fine. If you reply, we will communicate with you, and thank you. Very quickly, please, if possible.

Okay, great, we try to do our best, so thank you for raising that issue. If you find any other missed communication and so on, just ask us, because we will try
to do our best. So, Antanas, thank you very much, but remind us of the name, so that people can check your repository.

It is from Lithuania; it is a very simple name: Lithuanian international databases.

Lithuanian international databases, great. Thank you very much, and thank you for joining the community call. If you have any other comment or request, also feel free to contact us. We try to do our best to reply quickly when we find the more critical issues in terms of aggregation and so on; sometimes we cannot reply quickly, but we have already improved our aggregation team, and I hope we will not need to have delays in the future. Thank you very much, thank you all, bye bye, and have a nice Christmas!