joining on time, and thank you for participating in one more OpenAIRE PROVIDE community call. This is always an opportunity for us to share recent developments and novelties from OpenAIRE services, specifically those that target our providers: repository managers, publishers, CRIS system managers, and data repository and archive managers. So it is a pleasure to welcome you all, and it is always a pleasure to organize these monthly calls. It is also important for us to collect your feedback, so feel free to share in the chat and to ask questions, even if they are not related to the main topic today.

I will share some updates and then give the floor to Paolo Manghi, who is currently the OpenAIRE CTO, in charge of the technical developments and coordination of the OpenAIRE infrastructure. The main idea, as we promised in previous calls, is to give you an overview of the way OpenAIRE is contributing to the development of EOSC, and of how you, being part of OpenAIRE and contributing content to the OpenAIRE infrastructure, can also be part of the development of EOSC. Paolo will try to clarify that, and you have the opportunity to ask him questions, as Paolo is really active and has an important role in the definition of the EOSC architecture, so I think this is a great opportunity for you. If you need to ask him to clarify something related to other components of the PROVIDE service, you can do so at the end; we have time for that, but the EOSC topic is the main one for our call today. Notes and agenda are always available; Michael from the National University will share the link here, but you know that you can ask questions in the chat or even use the minutes document to put your questions.
So, three pieces of information for you. First, we are preparing an interesting development in the validation process in PROVIDE, related to FAIR assessment: you will have the opportunity to test your content provider against FAIR assessment procedures. We will have it soon and we want to present it in the coming call in June. If we are not able to demonstrate it end to end in the validator tool, we can at least present what we are preparing; so be aware that on the first Wednesday of June we plan to present this FAIR assessment capability that we are making available in the PROVIDE validator.

Second, and I think this was also a news item in the newsletter we sent out yesterday: the Directory of Research Information Systems (DRIS) is now available, produced and made available by euroCRIS. euroCRIS has a partnership with OpenAIRE, and OpenAIRE sponsored, within the OpenAIRE Advance project, the development of an API to integrate DRIS into the OpenAIRE services. Specifically, the registration of CRIS systems will use this DRIS directory: soon, in the PROVIDE dashboard, you will have a section to register CRIS systems, and you must be registered in DRIS, which will be the authoritative directory, before you can register in OpenAIRE. We can present this development in upcoming calls as well.

Third, a reminder about the metadata enrichments component, which we have already presented in detail in previous calls: be aware that now, in order to receive the full list of metadata enrichment events
you need to subscribe to the topics. We only present a sample of 100 events; we already detailed and demonstrated this in a previous call, two or three calls ago, so please don't forget to subscribe. You can check the information in the enrichments section, and we are sending out these subscription notifications without any delay.

Speaking of previous calls: all our recordings are available on the community calls web page, with different presentations and demos, including calls where we presented recent developments such as the integration of the Broker API in DSpace-CRIS two calls ago. If you are not able to join a call, know that we record them all. So these are the recent novelties and reminders I wanted to share; I hope they were useful. I am checking the chat and there are no questions, so I give the floor to Paolo Manghi to present the way OpenAIRE is taking part in the development of EOSC; we have contributed to the definition of the EOSC architecture and Paolo was quite active in that work. Paolo, I can see you are already able to share the screen, so the floor is yours. Paolo is happy to reply to your questions; if you want to use the audio, feel free to do so.

Okay, thank you very much for the introduction. Please feel free to ask questions; maybe write them in the chat, and if one requires an immediate answer you can interrupt me, because of course I won't be able to read the chat while I am presenting. I would like to start from a general overview of what we are providing today, because that will clarify the picture when we get to the EOSC one. The idea is that today we are offering a portfolio of services, which you see here,
organized according to three main silos: publish, monitor, and discover. Underlying all this, we have a service that is the OpenAIRE Research Graph, which is populated through OpenAIRE PROVIDE, which I think you are all familiar with, and which offers its content to third parties through DEVELOP, at the bottom as you can see. OpenAIRE DEVELOP is basically the set of APIs that we offer, together with the documentation, for third parties to access our services, while PROVIDE is a sort of gateway for content providers: on the one hand, they verify their compliance with the guidelines, as they have themselves defined them as the OpenAIRE community, so that their content can be collected and made part of the graph.

As you all know, the graph offers its content through different portals, which you can see at the top right. The MONITOR and the discovery portals especially, as you can see from the drawing, count on the graph to deliver content to different stakeholders. We have the monitoring aspects, the statistics that can be built out of the graph, and we have the discovery aspect, which we are customizing with respect to specific research communities and which we are now also starting to customize from a geographical point of view, for regions. All this is possible thanks to the fact that we have enriched the graph via several tools, which I am not going to describe today, that characterize its content based on its research flavour and on the geographical position of the affiliations, the organizations behind the research products. The services on the left, the publish ones, are different services that offer different ways to publish content through the known workflows we all know about; if you want to know more, ask, but they are not part of today's discussion. So, starting
from this picture, which I think is quite important as a starting point, we can go deeper into our relationship with EOSC. Keep in mind, for whatever I present afterwards, that we are trying to complete, to fill the gaps in, the digital services that are needed to support the research life cycle in general; that is very important, and we do this by tracking and collecting information.

Now, OpenAIRE and EOSC: OpenAIRE was there before EOSC, and that is something we like to highlight every time we present. The idea behind this statement, which seems obvious, is to stress that we were already doing a lot of the work that EOSC is willing to do, in our context and in other contexts as well. As OpenAIRE, we have engaged with hundreds of different stakeholders in what we call today the EOSC domain, which is scholarly communication in Europe. Here you find some of the labels, names and brands, only some of those with which we actually sat down at the table; each label here required one, two, three, four meetings, trying to understand, align, and find agreement, to establish future directions: how to deliver content, how to expose content, how to make sure things are interoperable. And not just to facilitate OpenAIRE, because OpenAIRE here is playing the role of a facilitator, but to facilitate the whole scholarly communication domain.

So the first point is interoperability: we have been working towards interoperability for a long time. The guidelines are a clear example, but there are several other things we are working on, for example the data source profiles, Scholix, behaviours and practices. On the right-hand side you see the research communities, the research infrastructures we are doing business with. Here again we are trying to focus on the practices: to understand
what open science is in their domain, what the subject of publishing is in their domain, what reproducibility is; trying to track whatever they publish and make it part of the graph, in a wider context where several communities are regarded. We need to somehow make the naming, the behaviours, and the profiles uniform, and we have been at this for a long time.

Then we started working in the EOSC Secretariat and in EOSC Enhance, which are shown at the bottom, and we continued some of these activities there. In EOSC Enhance especially, we are pushing for the guidelines: we are trying to have the OpenAIRE guidelines as a starting point for describing research products in EOSC. A starting point, of course, because in the EOSC case, yet again, the community is broader than the one from which we currently collect information, and it may change the way we describe things and behave; but we start from the great results that the guidelines are, also thanks to you for providing this content. At the same time, since we found out that this is an issue, we are trying to identify common ways to describe data sources across domains, across disciplines, and across types of data sources. As you can witness today through our portal, the classification of data sources in OpenAIRE has grown to be confusing, so we need something that is simpler and, most importantly, accepted by the community. Here again EOSC plays an important role, because whatever we try to do as OpenAIRE, as a community, we try to involve all possible communities; COAR is one example, and over the years we are reaching out to more stakeholders, so the endorsement is going to be even stronger. As OpenAIRE, since we have learned how to build things together as a community, endorse them, and engage with others, we are trying to transfer this knowledge, and the EOSC Secretariat gave us an opportunity to do this via the working groups,
where we could participate in the FAIR definition, the architecture definition, the PID infrastructure, and so on; many from OpenAIRE on the technical side, but also from other perspectives of course, including policies, which were part of this very interesting design process. Today we are in EOSC Future. EOSC Future inherits all the results, all the outcomes, of EOSC Enhance, the EOSC Secretariat and, before that, OpenAIRE, and it tries to continue this work of consolidation and definition around the interoperability frameworks to be used in EOSC, and to define a set of core digital services that can actually enact EOSC, give life to EOSC. Because EOSC is not just a number of loosely related organizations; it is also a number of digital services, a number of processes that regulate such services, and a way to stimulate cross-discipline interoperability.

This picture tries to clarify a little what is going on with the EOSC Core. The picture, to be clear, represents the digital services: anything that is a service digitally provided by organizations in Europe and, in some cases, beyond, because EOSC must of course count on services that are not necessarily in Europe, mainly PID authorities; they must be part of EOSC. The EOSC Core is intended as the minimal set of services required to make EOSC exist. Typically, in a service-oriented architecture, this is the registry: something that keeps track of the map of entities or resources which we identify as those creating, generating, EOSC. So the EOSC Core is intended as the set of such services, for example the AAI, together with policies and practices, because all services should try to comply with them; and the catalogues that we are defining. The catalogues are, for example, the EOSC service catalogues, which you have certainly heard about; but during the studies we identified the need
also for a research product catalogue, which luckily was provided by OpenAIRE: the graph, which is nothing but an attempt to deliver what you are storing on your side and providing to the world. So the EOSC Core is what is essential. The rest, the EOSC Exchange, is the set of services that are compliant with the minimal rules of participation of EOSC; we will come to that in a second. Any service, wherever it lives, under an organization in Europe, that registers, that provides a profile for the service in the EOSC catalogue, is implicitly part of the Exchange, so it can be found by consumers and can be contacted via EOSC. Then we have the EOSC Federation, which is the rest of what is out there: services that are part of research infrastructures but not yet registered to EOSC. And then we have the research and innovation community: everything else, the world, the rest.

Now, to be clear, the rules of participation of EOSC are very flexible. The simple principle is that registration, participation in EOSC, should have almost zero cost: just the registration of the service profile. On the other hand, several levels of engagement are offered by EOSC; the more you engage, the more you get back, so raising the cost of the integration is up to the service provider, based on the opportunities it will get out of it. Examples are the monitoring and accounting services, which are basically services provided by the Core to collect usage statistics and to verify, for example, that the service exists; or a deeper description of the services, which improves discoverability through the registry. These are the things we are trying to do, and again not as mandatory, but based on opportunities. The idea of the EOSC Future project is to deliver services that should be appealing, that should make a difference, that should go beyond what today the research infrastructures
infrastructures can do okay so across all the services of course we have the publications the data the software that are the products and the outcomes the input and the output of the services and of science in general and this can be found anywhere in the use right so the use core level exchange federation and community and of course when service is part of the exchange we expect the if it's a data source it's content to be available to the use core catalyst so here we are trying therefore to draft what is what it means for a data source like a repository an institutional repository or a data repository to be part of the exchange it can mean several things and we need to design define these things the MV is a very high level concept but it's the idea the minimal viable use case the is the idea of saying in any moment in time you can define the set of services that makes make the use essential necessary okay and these set of services can broaden beyond the use core some of the services out there will become essential slowly to the point that you have to ask yourself should this be part of the core at some point this is very typical in the process of end users right when google started some of the features that came out were just fancy new features that are now things that we cannot do without right so and they became part of the essential before they were fancy capabilities and today is something we cannot do without and a very similar principle will apply here I believe so when we look at open air where is open air plugging into the use square we have four services that are core so four services will be part of the core which is a very important results and also responsibility in a way the rest of the services are all registered and part of the exchange so any every service that we have is part of the exchange as you can see the research ground is part of the core the open air use statistics which is part of the monitoring and the counting in a way so we are trying 
to measure the statistics related to publications, data, and software, the research products, and overall to the data sources where they are stored. Then we have the Open Science Observatory, which will offer statistics and indicators, on fairness, openness, etc., across the combination of all publications, data, and software; that is very important. And finally we have PROVIDE, which I think is probably one of the most interesting parts, and one that is very close to you and your interests.

This picture maybe shows it better, but the idea is that if you want to register a service in EOSC, today you go to the EOSC service catalogue, which you can reach from the EOSC portal, and you register a profile, and so on. As OpenAIRE, we will do something more: we will make sure that when a service like a data source is registered to OpenAIRE, it is somehow compliant with EOSC. We will make sure that the profile of the digital repository is fed to EOSC; this means that our modelling of data source profiles will match the EOSC one. And we will make sure that whatever the data source does, in its effort to be compliant with the guidelines that we expose, will be revealed also to EOSC. We are willing, basically, to use PROVIDE as a way to verify the quality, the compliance level, of the data source and the underlying research products against what EOSC will define as its guidelines, which, as I mentioned before, will start from the OpenAIRE guidelines. This is a good result, I think, but it may change depending on the requirements set by the communities, of which, by the way, you are part. The idea is that a data source compliant with the OpenAIRE guidelines is de facto compliant, its research products are de facto compliant, with the EOSC guidelines. This will be the starting point, and we will evolve from there, as we are doing today already for our
guidelines. Changes may take place; it is just that the decision-making will happen in wider groups, where the whole community is again invited to participate and where OpenAIRE will play the facilitator role. As a result, all the research products will be made visible to EOSC via the graph. This means that the EOSC portal will offer search, discovery, browsing, and statistics facilities, everything, based on the OpenAIRE Research Graph APIs: it will connect to the APIs of the graph to provide something similar to what we provide as EXPLORE, or to what we provide for the communities, but tailored to the interests of the EOSC portal users; it will apply its own perspective. This also means that we, as OpenAIRE Research Graph providers, may need to enrich this information further; we will need to customize some of the subject fields, or whatever is needed, driven by extra requirements coming from the broader community.

As a first result, which I think is very interesting, the OpenAIRE Research Graph will include services, the EOSC services, of which the data sources are a subset. From EXPLORE, for example, you will be able to run searches that find services in EOSC in general, along with the relationships of the services with the products or with the organizations; by mining the graph and finding new relationships, new statistics and new indicators may arise. Very similar is the Open Science Observatory integration, which we believe is going to be one of the essential services there. And then the OpenAIRE Usage Statistics, which is again quite an important service, because in the context of EOSC Future we are willing to broaden its collection of usage data to data repositories. If today the UsageCounts service
basically focuses on institutional repositories, tomorrow we will make a distinction; and again, this is not clear-cut, because many repositories are already hybrid, so they provide datasets, and many of the events delivered to us already relate to datasets. We will probably have something that looks like an overlay on top of it, making sure we have more sources providing events to us. That is interesting, because we are planning to make the statistics, the downloads, the views, part of the elements in the graph, to enable search and discovery based also on that; we still have to think about it, but there are plenty of ideas. We may have indicators of effective downloads and usage with which to describe data sources.

Now, this is the last slide, and it shows the current status of the EOSC Future architecture as we intend to provide it. In EOSC Future we are not creating something new; we are gluing together the results of OpenAIRE and the results of EOSC Enhance to provide version one of the EOSC Core that I mentioned before (going back, it is this element here in the centre, the blue one): the first version, trying to match the requirements identified by the EOSC Architecture Working Group. As you can see, the portal here is split into two parts: one portal offering some functionalities for the consumers and some for the providers of EOSC resources. A provider is an entity delivering a service or a research product to EOSC; a consumer is an entity willing to consume it, to find, access, and reuse it where possible. The two views, the two ways of accessing resources, are characterized of course by different functions. As a provider, I am interested in a helpdesk, because I want to understand what I have to do, and I am interested in anything that is a resource management functionality: onboarding, profiling, editing,
and monitoring, so checking the results of my service: how many events concern my service across different sources, let's say different kinds of channels. As a consumer, instead, I have a different perspective: yes, I still need the helpdesk, but then I want to search, find, and access, and I want to be able to compose resources, so take one resource and do something with it together with another resource, in a transparent way. As a separate note, I would also like to access statistics about science in general. Of course, as part of the consumer portal we will have the training catalogue, the open science helpdesk, and so on; here we are just focusing on the subset of services of work packages four and five in EOSC Future, the strictly technical challenges; the more informative aspects are dealt with in other work packages. Sorry, let me close that parenthesis.

Now, if you look at the centre of this picture, the services with logos are the ones that we provide; the services without logos are the ones provided by other EOSC partners. The service registry is the one I mentioned before, and it is a registry of services. It is an aggregator: not only a place where you can manually include the metadata of your service, but also an aggregator of catalogues, like the OpenAIRE ones. We may have catalogues that deal with service profiles at the research infrastructure level and that expose those profiles according to the EOSC language for service profiles, which is called the service description template, defined by the EOSC Enhance project. The registry therefore plays the important role of holding one map of the existing services in Europe, around 700 if I remember well. The dashboards and the monitoring and accounting are linked to the service registry, because the dashboards are the views, the UIs, needed to access the monitoring and accounting statistics
relative to services. Some of the services already provide statistics to the Core, and these are limited to the marketplace, the service registry itself, and the portal itself. During EOSC Future a campaign will start to offer this to services in the Exchange, and as OpenAIRE we are willing to participate: we are already collecting KPIs, monitoring our own components, and we will certainly integrate with the monitoring and accounting of EOSC to offer our data, to make it transparently visible to the world and show how well our services are doing.

The marketplace is another interesting product. As we see it today it is probably very basic, but I think it will play a major role in the future. The marketplace allows consumers to find a service and the instructions on how to use it. Not only that: it offers a concept called a bundle. You can see a bundle as a group of services that go together because together they offer a useful functionality, and you as a user can buy a bundle. Buying a bundle means you need this topology of services, this combination of services, according to your specific needs: the services may be computing, a specific service deployed on that computing, and maybe some input data; and you may also establish the amount of computing you want, or which kinds of entities you would like as input data, and so on. That is for the future. When you buy the bundle, all the service providers involved in the bundle are contacted and, together with the consumers, they can establish a solution; in some cases the solution can be delivered on demand, and that is the interesting part, what we call the composability of resources. As OpenAIRE, we are doing some interesting experimentation here, which concerns the data repositories, the institutional repositories, and so on, based on the simple idea that if we establish common methods to deposit objects into
repositories, SWORD-like (I am sure many of you are familiar with this framework), and if we register our data sources to EOSC specifying what is necessary to deposit according to these protocols, then we may have scenarios where consumers can find these services through EOSC and have depositing workflows based on discovery: essentially, offering users the possibility to pick the service into which to deposit, from third-party applications. This is one of the things we are working on in OpenAIRE-Nexus; I am not going into the detail of it, but you are all welcome to participate. So the marketplace has great potential, I think, that goes beyond what we see today; of course it works on the service registry, from which it fetches its material, and in the future it will also fetch from the OpenAIRE Research Graph, as you can tell by the arrow there.

Finally, there is a special task on AI, artificial intelligence for discovery, that was part of the call; in EOSC Future we were asked to address this specific task, and the idea is not only to inspect the graph, thanks to deep learning techniques, to improve discovery, but also to collect information for AI from the original sources. This means we may have something similar to UsageCounts, in a way: collecting actions related to the behaviour or understanding of users from the original repositories, where possible, and gathering this information centrally to exploit it from the EXPLORE perspective. When I say "we", I do not mean that OpenAIRE is always necessarily involved in these activities, although we are quite pervasive in these two work packages, because the graph puts us in a central position, as we will acquire several kinds of data from different services.

The helpdesk is offered in two fashions. There is the EOSC Core helpdesk, dedicated to explaining what the EOSC Core services are, how to use them, and so on, for
both consumers and providers. But there is also an option to provide the helpdesk as a service, for service providers who do not offer a helpdesk today: they can do it through EOSC, so you can make a request, get your helpdesk, and start interacting with your users through this structured mechanism.

Finally, an interesting development, I think, is that we are moving the OpenAIRE guidelines, as I mentioned before, into a quite central position with respect to overall EOSC interoperability. The so-called EOSC Interoperability Framework is the place where communities can discuss, debate, and agree on existing standards, or define new ones if the existing ones do not match their expectations, describing them in a uniform manner; this means that protocols will be provided with a persistent identifier, the relative metadata description, and so on. The guidelines would be one of those: they would be adopted by the EOSC Core as the standard identified for the description of research products, and this will, as I mentioned before, open a gateway to EOSC via OpenAIRE. The guidelines, if you think about it, are a way to make behaviour uniform in scholarly communication in general: we are trying to describe our objects in a common way, which is very FDO-like, very much in the spirit of an interoperability framework.

The problem is that in several circumstances this may require an effort from the community to buy in to what we are proposing. For institutional repositories it has been very straightforward; this is something they are used to, because their level of integration is at the scholarly communication level, not at the level of a specific discipline. But for data repositories it is a different story, because they often target metadata formats and models that satisfy their specific user communities. When they need to adhere to the OpenAIRE guidelines, they have to spend money and
resources, invest in this, to expose their metadata to DataCite, to the OpenAIRE guidelines, because in many cases they are not DataCite compliant. It is not that we have not done this at all, but EOSC reminds us of a part that we were really missing. The idea is to move towards research communities by broadening our OpenAIRE guidelines, basically saying that for some communities which are mature, for example whose data sources already expose metadata uniformly through specific protocols, we can spare them this effort by endorsing their metadata format as OpenAIRE guidelines, as EOSC guidelines. An example is the ELIXIR guidelines for Bioschemas; that is what we are doing now as an experiment. The idea is to make sure that OpenAIRE goes towards the data sources rather than vice versa, in a good balance, of course, in the trade-off of effort, because we cannot do it at the level of every specific proprietary format; we need a community to offer a common understanding of what research products are, like ELIXIR in this case. So we are including a new set of guidelines which is research-community specific, and similar actions can be taken in other contexts. This means that OpenAIRE takes on more of the effort of mapping local guidelines, which are uniform across several sources, into the OpenAIRE Graph. That is something that started with EOSC, I think, and I like that, because EOSC gave us the opportunity to think more about these chances, and that is where we are moving next. This work has been performed so far in the context of EOSC Enhance, and I am done. So if you want to ask me any questions, I would be pleased to answer; it is a lot of material, I know.

Thank you, Paolo. There are already some questions in the chat. There is one that, as John said, is
not fully relevant for this forum, but we can also clarify where the Open Research Europe platform fits into the ecosystem. Well, the platform is a data source, so it is one of the services that will be registered in the EOSC Exchange, as we called it, and we are working towards making it compatible with the OpenAIRE Guidelines, so they are trying to expose their metadata according to the Guidelines, and it will happen this year. This is their first basic action, and we have already sat down with them a couple of times.

There are three other questions, you can read them. "Is it version 4 of the literature repository guidelines?" Yes, it is version 4 of the literature guidelines. Indiana has two good questions. To the first one the answer is yes; this is when the two services will be fully aligned and integrated, so it will take some time. It means that OpenAIRE will change its internal representation of data sources to match the one that is being defined for EOSC, which I, together with others, am defining with Mark from [unclear], with the endorsement of re3data and FAIRsharing, which are taking a look at what we have and giving us feedback, because they will themselves expose data according to these common profiles. The second question is whether the Open Science Observatory is based on OpenAIRE data: yes, it is, and what we hope is to collect further data in the context of EOSC Future, for example the monitoring data about the services, so anything that we can use to provide you with interesting open science indicators.

Just to reply to your question about version 4 of the literature guidelines, and specifically the validator: we had a period with some issues validating the OAI-PMH interfaces, and for a short period we marked it as "not in use". I thought that this was already
solved. I was trying to check: if the "not in use" label is still there, it should not be. The validator is working. Sometimes we have issues when people try to test repositories with a large number of records, but it is not a common issue, so we are not sure what is behind that problem, because usually we do not have it; some of our users reported issues that we are trying to solve. But yes, the validator is working for the literature guidelines. I am checking here with Andreas; we thought that "not in use" had already been removed, but maybe it is still there, so thank you for reporting that and we will solve it. This question was also important for Alexander and Bianchi, so Alexander and Bianchi, sorry for that: be aware that the validator is working well, so you can use it. And feel free to turn on your microphone if you want to share any thoughts or ask questions; sometimes it is more difficult to write them.

Julia is also asking something here: many thanks for your talks; do you think that research assessment based on open science will use a specific provider for the whole system? This is a more difficult question, but it is one more contribution. We can say that for sure our systems and the repositories are a contribution, taking into account that several of the DGs are already taking our data, your data, into consideration for the evaluation. They are pushing a lot to use open data rather than going to Scopus, to Web of Science, etc., and we are trying to clean it up as much as possible, because that is the issue; we have already had several rounds of conversations with them. So if by research assessment you mean the one of the Commission, yes, this is already taking place and using EOSC resources, because the OpenAIRE
catalogue is a resource for that component of the European Commission. I assume that research assessment is broader than just that, so it also concerns institutions, but this is a journey. Open science is a journey, and for sure the tools that we have need to prove their value and their role in the new ways of assessment, and their contribution to new metrics; they are for sure a contribution to this open science journey. I hope we were able to reply to your questions; feel free to ask more if you want. Many thanks for following; I think it was quite important to have this overview. We can finish; if you want to ask anything else you can do it in the coming meeting. We have one more minute, and then forgive me, I have to run to another presentation. Anyway, I am of course more than available via email, so if you want to know more, just feel free to write to me.

We are coming to an end; I have only two slides, and then if there is any question you can ask it and we close. Upcoming calls: we have two calls scheduled, for June and for July. For sure we will do the one in June; then we will decide whether we can do the one in July or not, maybe not, but they are scheduled, and for sure we will have the call on the second of June, the first Wednesday of the month. Notes and recordings will be made available. Subscribe to the newsletter, or disseminate it in your institution or in your countries to those who you think can benefit from receiving the information about the OpenAIRE PROVIDE services, functionalities, features, etc. We send out the newsletter every month in the first week, usually on Monday or Tuesday, before our calls on Wednesday, so be aware of that: subscribe if you have not subscribed yet, or disseminate it. And many thanks
for your participation so all the links my colleague Andrea shared here so many things for your contribution and Paul many things for for being here with us and contributing to this this call so see you in one month we have some novelties for you to present the fair enabling assessment the fair assessment that we want to integrate in our validator some other some other functionalities for sure will be made available for sure the finally the multi-user access will be made available in May this is the promise that I have for you we have already promised to April but but for sure we'll have it which is important so for you to to have more than one manager of your data source assessing the provide dashboard so it's all and thank you for joining this call and so see you in one month and so you can check the recordings and the slides are available many things