Well, good evening from me — kaya! My name is Matthias Liffers and I'm from the Australian Research Data Commons. Thank you very much for coming to our webinar today on the recent report written by MoreBrains for the Australian Research Data Commons and the Australian Access Federation. I would like to start this webinar by acknowledging the traditional owners of the land on which I am, the Whadjuk people of the Noongar Nation, and pay my respects to their elders past, present and emerging. I would also like to extend that respect to all of the First Nations people of Australia, and of course any First Nations people in this webinar. Without much further ado, I would like to hand over directly to Josh Brown, one of the brains of MoreBrains. Over to you, Josh.

Thanks Matthias. Let me just share my screen. Okay. Well, good morning, good afternoon, good evening everyone — thanks for joining us in whatever time zone you're in. It's brilliant to be able to talk to you all today. My name is Josh Brown. I'm a co-founder of the MoreBrains Cooperative and its research and strategy lead, and I'm going to talk to you today about the process and findings of a cost-benefit analysis of the adoption of persistent identifiers (PIDs) in the Australian research system. This is based on a study that we did previously in the UK, which is where we first developed the methodology we used here. We extended it a bit, and I'll explain how that works as we go through the presentation, and try to tease out some of the implications of these findings before we hand over to our fabulous panel for more discussion later. The context for this research is that there is a huge amount of wasted time and money across the research ecosystem — a lot of effort and a lot of expertise being squandered on administrative tasks. Some estimates say as much as 40% of a researcher's time is spent on administration, in these straitened times.
This is an unacceptable drain on funding for research. What our study can demonstrate — for just a limited set of the metadata that people are putting into their administrative systems — comes to 38,000 person-days a year, with an opportunity cost of 24 million Australian dollars a year. Those are the headlines, and that's some of the context. I'd like to say a bit more about this, because it's not just that expertise is being squandered; it affects people's ability to understand the research ecosystem. Jason Clare, the Minister for Education, actually asked the Australian Research Council to investigate ways of making the national research assessment exercise, Excellence in Research for Australia (ERA), and also the grant application process, much more efficient. We'll show in a case study later that they've made great strides in this direction already, but it helps to set the context that for policymakers, as well as for researchers at every level of the research ecosystem, these costs are mounting up, and it's really clouding everyone's ability to deliver and assess research. Our focus for the project was on a set of five priority PIDs, listed here on the screen: DOIs for grants, ORCIDs for people, RAiDs for projects, RORs for organizations, and DOIs for outputs, including data, articles and preprints. All of these have open options. One of the things I'll just say here is that this openness — the open metadata being available under a very permissive licence — means that the information associated with those PIDs is available.
It's accessible and it's reusable, whether that's in someone's proprietary research management system within their institution, or in an open analytics database that serves the whole community. That reuse is absolutely critical, because that's where these benefits come in, so I just want to emphasize the importance of openness to these solutions. In understanding the scale of activity, we concentrated on the number of entities those PIDs could actually identify in the Australian context. We used official numbers for the number of researchers, getting to about 108,000 full-time-equivalent researchers active in the Australian university sector. We looked at the number of researchers per publication and the amount of time it takes to type basic factual information — simple descriptive things like the title and citation for an article and so on — into an electronic system, and we used pre-existing research for this rather than reinvent the wheel. All of these citations are on the slides, which we will share afterwards, if you want to check our workings, as it were. Then we quantified the number of grants and the number of publications. A grant comes from a funder, so it's one award, and that hovers around 6,000 a year. We got to this by combining data from Digital Science's Dimensions database with information from the Australian Research Council and the Medical Research Future Fund. That oscillates a bit but is roughly steady, and then we have a steadily growing number of publications, up to about 180,000 a year for the last year for which we had reliable data. To identify the number of researchers currently active with an ORCID — to understand the coverage — we went for the number of active records, where we've defined an active record as one where someone has logged in or updated their record in the last year.
That is, ORCID iDs that have a .au email suffix associated with them; there were 122,000 at the time of writing the report. We also analysed the number of projects as distinct from grants — as someone has observed in the Q&A — and that's roughly 25,000. We did that by using the evidence from the UK cost-benefit analysis to assess the number of projects in the UK, and scaling it according to OECD data about recent levels of research funding. We talked about the fact that information about grants and publications and so on is being manually entered into systems, and we recognized that one of the limits of the UK study was that we had to assume each piece of data was entered only once. This is one area where we took a step forward with the analysis and the methods we used: we actually had time to survey Australian institutions and find out how much time they are spending repeatedly inputting the same data across the population. What we found is that about half of this effort comes from researchers who are manually entering the data, and the other half comes from administrators, so it really is split between professional services staff and researchers. Grants information is typically manually entered into a system 3.25 times, and publications — just the basic citation information — typically 3.1 times. Previous research, cited on earlier slides, estimates that project descriptive information can be entered as many as six times. Those are the numbers we used for the analysis. Just to mention the UK study again: we've revisited that study and used these multiplier numbers to arrive at new figures, which I'll talk about a little later. I skipped one slide too many, but this is the core analysis. As we said, this is how we got to the 38,000 days and the 24 million Australian dollars.
If we say there are 180,000 publications with four authors each, multiply that by 3.1 re-keying events and — based on evidence from the work we did in the UK — about 6.73 minutes (quite precise, for an estimate) of data-entry time per citation, it adds up dramatically, as you can see. Following the same sequence for grants, with an average of 10 minutes to enter that information, and the same for projects, with six re-keying events per project, you get to just under 38,000 person-days and close to 24 million dollars a year. That's a significant opportunity cost, but I would just like to stress that this is only the basic metadata. The key point is that there are whole other kinds of benefit available here, because the presence of the PID in a system triggers that pull of metadata, which is vital. That's what saves researchers time and effort, but it could also automate processes. And once we have this level of coverage, a whole level of insight, analysis and aggregation of information becomes possible — one that enables better strategic planning and enables institutions to provide better support for their research portfolios. We haven't been able to quantify those kinds of benefits in this study, so these numbers are based only on the benefits that are easiest to quantify; there are a lot more benefits to be had. We also did an analysis of the level of adoption required to deliver this. One of the key things here is that nobody really benefits until everybody benefits — it's a bit like a social network: if most of your friends are not on there, why would you interact with it? These network benefits are really important: the more organizations use PIDs and optimize their workflows to pull in that metadata and reuse it, the more the benefits grow.
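For readers who want to check the headline figures, the arithmetic just described can be reconstructed in a few lines. The entity counts, re-keying multipliers and minutes-per-entry below come from the talk; the length of a working day and the daily staff cost are illustrative assumptions of ours, not figures from the report, so treat this as a back-of-envelope sketch rather than the report's actual model.

```python
# Back-of-envelope reconstruction of the headline figures quoted in the talk.
# Entity counts, re-keying multipliers and minutes-per-entry come from the
# presentation; the working-day length and daily rate are assumed.

WORKDAY_MINUTES = 7.5 * 60   # assumed working day (the report's figure may differ)
AUD_PER_DAY = 640            # illustrative loaded daily cost of staff time

# (items per year, people entering each item, re-keying events, minutes per entry)
entities = {
    "publications": (180_000, 4, 3.10, 6.73),  # four authors per publication
    "grants":       (6_000,   1, 3.25, 10.0),
    "projects":     (25_000,  1, 6.00, 10.0),
}

total_minutes = sum(n * people * rekeys * mins
                    for n, people, rekeys, mins in entities.values())
person_days = total_minutes / WORKDAY_MINUTES
opportunity_cost = person_days * AUD_PER_DAY

print(f"{person_days:,.0f} person-days per year")   # ~37,000 under these assumptions
print(f"AU${opportunity_cost / 1e6:,.1f}M per year")  # ~AU$24M under these assumptions
```

With these assumptions the sketch lands a little under the 38,000 person-days quoted; the gap comes from the assumed working-day length and rounding, which is exactly why the talk hedges these as estimates.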
The more data becomes available — because more people will be registering identifiers and recording that metadata — the more the benefits build up. We looked at this using the lazy S-curve you can see in the top left, and we estimate that if you get to about 80% coverage of identifiers for the key entities we talked about earlier, you get 90% of the benefits. I think that's realistic and achievable — the Australian ORCID Consortium showed it in a five-year period — so we think it's really important to have that ambitious goal. As I mentioned earlier, we did some case studies to humanize the story and add some context. The first one concerns the Australian Research Council. They have an integration in their funding application system, RMS, that pulls information from ORCID records and from the Crossref API to populate publication information; you can see how that process works in the schematic at the top of this slide. They said 78% of the publication data submitted since late 2018, when the system went live, has come via ORCID, and just those citations have saved the equivalent of AU$850,000 of researchers' time. But this is not evenly distributed, of course. One example: Professor Joe Shapter, who wrote a piece about this for the Australian Access Federation blog, said that new integration saved him personally three to four days of effort per grant application, which is, as he says, absolutely staggering.
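For the technically minded, the kind of metadata pull the RMS integration performs can be sketched against Crossref's public REST API (`https://api.crossref.org/works/{doi}`). The ARC's actual implementation details aren't described in the talk, so the field selection and sample record below are illustrative; the response shape follows Crossref's documented schema.

```python
# Sketch of a Crossref-style metadata pull: given a DOI, fetch the work
# record and reduce it to the basic citation fields a grant system needs,
# so that nobody has to re-type them.
import json
from urllib.parse import quote
from urllib.request import urlopen

CROSSREF_WORKS = "https://api.crossref.org/works/"

def crossref_url(doi: str) -> str:
    """Build the URL for a work record in Crossref's public REST API."""
    return CROSSREF_WORKS + quote(doi, safe="")

def citation_fields(message: dict) -> dict:
    """Reduce a Crossref 'message' object to basic citation metadata."""
    authors = [
        f"{a.get('given', '')} {a.get('family', '')}".strip()
        for a in message.get("author", [])
    ]
    return {
        "title": (message.get("title") or [""])[0],
        "journal": (message.get("container-title") or [""])[0],
        "year": message.get("issued", {}).get("date-parts", [[None]])[0][0],
        "authors": authors,
    }

def fetch_citation(doi: str) -> dict:
    """Live lookup (network required); an RMS-style system would call this on submit."""
    with urlopen(crossref_url(doi)) as resp:
        return citation_fields(json.load(resp)["message"])

# Offline demonstration with a hand-made record shaped like Crossref's response:
sample = {
    "title": ["An example article"],
    "container-title": ["Journal of Examples"],
    "issued": {"date-parts": [[2021, 3]]},
    "author": [{"given": "Ada", "family": "Lovelace"}],
}
print(citation_fields(sample))
```

The point of the pattern is the one Josh makes: once the PID (here, a DOI) is present, the descriptive metadata comes along for free instead of being re-keyed 3.1 times.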
We also spoke to researchers at the Terrestrial Ecosystem Research Network (TERN), a really complex, very demanding, highly collaborative research institution with a real range of sensors, instruments and different kinds of equipment, and with students and projects running simultaneously. In all of that complexity, the PIDs they already use are delivering significant benefits. What they're hoping to do is extend their coverage of PIDs — bring in more DataCite DOIs, bring in International Generic Sample Numbers (IGSNs) and so on — so that they can have what they're calling a ground truth for the datasets, categories and samples that they use. This is really important in such a complex and diverse research environment, and it speaks to the way that identifiers, structured metadata and the exchange of these things across systems can bring clarity to a really fast-moving, dynamic and complicated research process. We also looked at case studies provided by the Australian Research Data Commons (ARDC) and the Australian Access Federation (AAF), who provide research identifier services — or access to those services — to the Australian research sector. The ARDC, as you can see here, provides a number of those services; the AAF is focused on leading the national ORCID consortium. But that ORCID consortium alone, just in terms of reducing the cost of ORCID membership and lowering the cost of integrations, has saved 24.6 million dollars over the five years of its lifespan. And that's before we get on to calculating the benefits that pulling data from ORCID records has brought. So these are really significant benefits, and the centralized approach that has been taken in Australia has really helped to drive down the costs of PID integration and adoption, which of course maximizes the available benefits to be offset against those costs.
In summary, just to repeat that staggering number: 38,000 person-days a year wasted on data entry — trained researchers, and trained professionals supporting researchers, who could be using that expertise to do something much more valuable with their time. We also made some recommendations for a strategy in Australia to help get to that 80% target. We need to build on the leadership that Australians already benefit from in this space with the AAF and the ARDC; funders need to follow the example of the Australian Research Council and start automating these data processes; and there needs to be a whole-of-sector approach. It needs to support smaller institutions, or large institutions with a very small research footprint, which don't have the same investment in support services or software platforms to provide data about research. It needs to be inclusive, and it needs to be comprehensive, in order to get to that 80% target. And that's my last slide. If you want to read more about the study and get into the detail of our work, the DOI takes you straight to the record, and if you have any questions or want to stay in touch, that's my email address. We're doing a lot of other work around identifiers — looking at the benefits, analyzing the workflows, the ways that PIDs and metadata exchange can deliver benefits — so take a look at our website for more information. Now I'll hand over to Tash, who is going to talk about how some of these findings have been received across Australia.

Thank you, Josh. Thanks so much.
I don't have any slides, but I'm just going to talk for a few minutes about the response to the report in Australia. First of all, those figures, as Josh mentioned, are staggering, and they have gained quite a lot of attention from the research community. We've socialized the report fairly widely: it has gone to the Deputy Vice-Chancellors (Research) — they have a committee through Universities Australia, and we presented to that group. We have also done various presentations to our main funders, the Australian Research Council and the National Health and Medical Research Council, and various other presentations. I think there have really been three main responses. The first is: wow, those figures are really impressive — and people can immediately see how the figures were arrived at. They can see the wastage of re-keying something multiple times, by multiple people, in multiple systems, and the way that PIDs can help with that. So the arguments in the report are very clear; people understand them, and there's really no argument about that — people say, yes, you can see that's very beneficial.
The second response is around the alternative to the Excellence in Research for Australia (ERA) exercise, our national research assessment exercise. As Josh mentioned, the Minister for Education has asked the Australian Research Council to look at alternatives to the current ERA exercise, and PIDs are of course part of that. The ARC case study provides a lot of evidence that integrating ORCID into the RMS system has achieved massive gains for researchers and for the ARC. That quote from Professor Joe Shapter — that he saves three to four days per grant application because of that integration — is absolutely massive, especially if you think about not just the successful grants but also the grants that weren't successful, which people spent a lot of time on. So there are a lot of gains to be had there, and that has been demonstrated; now they're thinking, well, if we've used ORCID for that, what other identifiers could be used to make similar gains, or be used in a reporting exercise? Those things are at the top of people's minds. The third response is around looking at the gains we have made so far and the infrastructure we have, because we don't yet have a formal national PID approach. We have the Australian ORCID Consortium which, as Josh mentioned, has saved — what was that figure again? It's the middle of the night here and my brain's gone — 24.6 million dollars for the Australian research organizations that are part of it. And it's not just the savings we're talking about. This is a PID cost-benefit analysis.
So the focus is on the costs, but there are of course significant benefits from PIDs to do with accuracy in reporting — the persistent identification of research outputs and objects, and linking those to researchers — which all make the system more trustworthy and make it possible to track impact and cite confidently, things like that. But this report is mostly focused on the cost-benefit side. The next step for us, I think, is to move from a national approach to a national PID strategy — and how do we do that? The first step we'd like to take is to leverage the interest we currently have in the report to start a national conversation around this, and around what might be a useful strategy for Australia. We need to get together a steering group of people, probably at a more senior level, who are able to take this forward strategically, and probably spawn some working groups from that as well to look at particular aspects of the strategy and move it forward. It's similar to the way we built the Australian ORCID Consortium: get a demonstrator model up; start with the problem — what is the common problem that we all have, and are PIDs going to be helpful in solving it?; then ask what kind of model would be helpful; float that as a document that people can comment on, taking an iterative approach; and then have an action plan, or a sort of roadmap, to take it forward. We are fortunate that we have the national infrastructure in the ARDC to help take that forward, in collaboration with our colleagues at the Australian Access Federation. We have also contracted Linda O'Brien as our consultant for the national PID strategy.
Linda is on the call tonight. She is also the chair of the international ORCID Board and has been with the ORCID Consortium from the start — she helped set it up right at the beginning in Australia — so she's a really fantastic person to have on board to help lead that discussion forward. The last thing I wanted to reflect on was some of the potential decisions we might have to make around a national PID strategy. Some of them might be around scope. Are we going to focus on the cost benefits, for example, and are those five priority PIDs therefore the ones we want to focus on? Because there are challenges with some of those PIDs in terms of current adoption levels — around RAiD, for example — and in how we take these things forward. We'd like to have a five-year approach, which sounds like a long way off, but that's what we actually did for ORCID, and we achieved it, so I think we need to have the grand vision here. There's also a question around whether we focus on the savings side or the impact side, or a bit of both. For example, assigning PIDs to research instruments is a big discussion in Australia — we have an instruments community of practice, and there's a lot of interest in it. That's less about the cost benefit, because not many people are assigning those PIDs yet, so there's not the metadata to reuse; it's more about tracking impact. The big research facilities that are funded by the government, in particular, want to track the usage of those facilities. So should that be part of a national PID strategy? And if so, how do we do it, and how do we see the gains from it?
And where is that in terms of scope — where does research start and government stop? What's the role of government data in this? Government agencies don't tend to see themselves as producing research; however, the things they produce are used in research. Where does an ORCID apply in a government setting? Where does a ROR apply in a government setting? What about organizations that are transitory — funded for a few years and then gone — do they fit in the ROR framework? Where does RAiD, as a research project identifier, fit in government, which has different kinds of projects that are not about research? So we've got quite a few questions that I think will come up, but I will leave it there and hand over to Alice.

Thanks Natasha, and hi everyone. I apologize — I'm going to mostly have my video off because my internet has been a little iffy, and it's probably more stable that way. Thank you, Josh, Natasha and Matthias, for kicking us off. We're now going to get into the more discussion-based part of the session, and I'm really delighted that we have representatives here from all five of the priority PID organizations mentioned earlier. I'm not going to introduce you in great detail, if you don't mind — you can add more about your own contexts — but we have Chris Shillum and Matt Buys from ORCID and DataCite respectively, representing the executive director position; we have product managers Maria Gould and Shawn Ross from ROR and the ARDC; and, from Crossref, Jennifer Kemp, Director of Partnerships. I want to say a special thank-you to Maria, Shawn, Natasha and Matthias, who are at the extreme ends of the time-zone situation here — Maria is in California, and Shawn, Natasha and Matthias are in Australia — you're all heroes.
So without further ado, here's how we'll do this: each speaker is going to take an initial stab at one of the questions we've come up with, and then we'll open it up to see if anybody else has anything to add. I really love your questions and comments as well, so please do put them in the chat or the Q&A; we'll do our best to answer them one way or another before the end of the webinar, but if anything is unanswered we will get back to you afterwards with an answer. So, Matt is up first. What are your immediate thoughts on the report, and what stands out most to you, both from a DataCite perspective and more broadly if you want?

Well, as you mentioned, it would be really great to hear from some of the attendees in the chat as well on some of their thoughts. I think the first impression that I get is: how do we make these figures a reality? I think we all look at these and go, this is great — this is what we talk about and this is the promise that we make to the community. So how do we make sure that this is a reality, a lived experience, for everyone in the community? It definitely resonates quite closely with the approach that we have at DataCite, and that many of our partners around the table today have: an approach where we look at building communities of practice and coming together across stakeholders with a common goal and common effort. I think the report speaks to that, and it starts to focus on how we translate that national PID strategy — that national focus around persistent identifiers and open infrastructure — into tangible benefits, those benefits being the reuse, automation and aggregation of information for downstream use cases.
I think it also definitely emphasizes the need for open infrastructures to work together — and we work very closely with everyone sitting around the room today — which is really important, because it helps us demonstrate the power of interoperability and coordination. A lot of these benefits really only come from that interoperability and coordination. But it's also worth noting that this is not just a technical solution; it's really important that we focus our collective efforts around what I'll frame as technology and engagement. It's one thing to make it technically possible, but it's really important that we understand the work that researchers are doing, their workflows, and the key touch points they have in their day-to-day. I know this from some of the projects we're running, like the FAIR Workflows project, where, before any pre-registration was done, we worked with the research group to identify: what are your workflows, what are the steps you go through, that we can support across the ecosystem? So for all of the priority persistent identifiers included in the report: how do we bring them all together to really make sure that this is a lived experience for that research group? And then, also, this is not just about doing it in a pilot environment — it's about demonstrating it at scale, across domains and across the board. How do we make sure that it really does scale, to bring the benefits and make sure these figures are a reality? Those are, I guess, some of the key things that stood out to me; I'm interested to hear others' thoughts and comments on that.

Thanks Matt — yes, lots of good points, and I'm guessing that probably everybody around the virtual table would agree with what you said. Does anybody have anything to add to that at all?
No? That's actually quite a good segue into our next question, which I think is going to be in many ways critical to achieving the reality that we all want, both in Australia and in other parts of the world where similar conversations are happening: how do all these priority PID organizations currently work together — whether in terms of messaging or technical developments or whatever — and do you see this changing at all in future? Can you give us a sense of how you see that shaping up, particularly in response to this report but also in general? Maria, I'll throw that to you first, please.

Yeah, thanks Alice, and thanks everyone for being here today. I'm here representing ROR, and also representing my primary institution, the California Digital Library, which is one of the organizations that operates ROR. Just to pick up a little on what Matt was talking about: I was struck by the way the report really highlights the value and power of this framework of thinking about how all of these various PIDs can work together and achieve transformative results, but also the challenges of treating them all as a group, because there are some important distinctions between them — Natasha was picking up on that a little too. ROR, for example, is not itself an organization or standalone service in the way that DataCite, Crossref and ORCID all represent membership organizations. I think there's a lot of power in thinking about how we can connect the dots between these PIDs, but we also have to understand the nuances between them. I wanted to answer this question in particular because I think ROR is really emblematic of what we can achieve with the kind of collective action and collaboration across the community that the
report touches on. ROR is an open registry of identifiers for research organizations. It was developed through a collaboration of 17 different organizations, and it's currently being led by three — the California Digital Library, Crossref and DataCite — and that's a deliberate choice. In other words, the end goal is not for ROR to ultimately become an independent organization, but to keep running it as a collaborative effort, because ROR really depends on wide adoption — in Crossref metadata, in DataCite metadata. It depends on organizations across academia, like the California Digital Library, advocating for the adoption and use of open identifiers. So ROR is a strategic part of what our three organizations are doing, and by extension ROR also enriches the strategic goals our three governing organizations have around open metadata, interoperability, community leadership and community investment in infrastructure. I think ROR is really emblematic of how organizations can work together to support the kind of open metadata infrastructure we're talking about in this report, and that's one of the unique values ROR provides. The other thing I wanted to mention is that ROR is one of the newer PIDs on the block, so to speak, and it came into a context in which there was already a really strong framework and fabric for collaboration that helped ROR get off the ground. DataCite, Crossref and ORCID already had really strong working relationships — Crossref was very involved in launching ORCID — and ROR has really benefited from the strength of those existing collaborations, both to build networks and to help drive adoption.
I don't personally see a lot changing substantially in terms of how our organizations already work together, but, speaking from the standpoint of ROR, we're really trying to insert ourselves into that existing framework and leverage our collective knowledge, expertise and communities to help drive change.

Thank you — and yes, I totally agree, ROR is an excellent example of how all these organizations have come together and are already working together. I know we also have quite a lot of people from these organizations on the call, so if you have examples of how you're all working together, please do share them in the chat; I think people will be very interested. But Chris, Matt, Shawn, Jennifer — if there's anything else you would like to add, specifically about ways you're working together that people would be interested in, now is your chance.

I guess I could say a word or two about that, since I've just come, a couple of weeks ago, from meetings with several people here in Europe. RAiD is sort of the new kid on the block as far as PIDs go, and we're definitely benefiting from the experience of the longer-established PIDs. We've gotten an enormous amount of help with everything from metadata to sustainability plans and everything in between. And then, in a more concrete sense, there's the technical integration we're looking at, which I'll talk about in a minute: a RAiD is, to a large extent, a container for other PIDs, and we've already started discussing how we're going to do that integration to ensure data quality — so that something that purports to be an ORCID or a DOI actually is one, and is the one you're looking for.

Thank you, Shawn. And again, a perfect segue into the next question, which is for you, as you note.
And actually it's come up in the chat as well, so I'm glad we have you here to help answer it. I think it's easy to see why people and grants are important in the PID cost-benefit analysis, but projects are new as an entity for having a persistent identifier. So could you tell us a little more about why they're so important? I mean, those of us who have worked on or read the report can absolutely see it, and in fact I've heard a lot of enthusiasm, but it's not necessarily super obvious until you realize what they're trying to do. So if you could explain a bit about that, that would be great. Yeah, sure. I guess the way I like to explain this is that we may not all agree on the definition of a project or research activity, but at least colloquially, as researchers and as research administrators, we talk in terms of projects all the time. And I'm also a researcher in history and archaeology, and even in HASS disciplines, where there are a lot of single researchers who aren't necessarily working in large collaborations, they still talk in terms of what project is going on now. So I think there'll be a bit of organic definition of what a project is. But as far as RAiD is concerned, we're really looking at it as the envelope or container for all of the inputs, outputs, organizations and contributors: it's a PID that ties these other things together. I'll give three quick examples of how this can help. And I think Natasha mentioned this too: even in the numbers you've got, when we talk about the cost-benefit analysis, think about all of the unsuccessful grants and all of the applications.
And if you think about the life cycle of a research project, at the front end there are direct savings from efficiency gains. A group of researchers can come together, come up with an idea, get some internal funding from their organizations, and, considering the sub-20% success rates for grants like the flagship Australian Research Council grants, they may apply for any number of grants before they're successful. A RAiD offers a source of truth, a place for the information about those projects to be stored that can then be referenced across the half-dozen organizations and the 10 or 12 researchers who might be involved in a project, to make sure it's consistent and doesn't have to be re-entered as we move from one grant application to another. Sometimes there's a little bit of, and I think this came up in the chat and in the Q&A, I wouldn't call it confusion, but a question about the relationship between a project and a grant. I like something Natasha said: a grant is something you get, and a project is something you do. As a humanities and social sciences scholar, I can say there's a very loose relationship between grants and projects; many projects function without a grant, and other projects have multiple grants. So it's not a one-to-one thing, and a project ID, I think, can capture that bigger picture. And very quickly, on reporting: I saw this recently when I went back and looked at a record tied to a grant ID for an old grant I'd won some years ago, and it said this grant produced three outputs.
Whereas that grant was essentially a seed grant that launched a project that's been going on for 12 or 15 years now, has produced probably a dozen outputs, and has now mapped a big swath of heritage in Eastern Europe and had a lot of other impacts that you could capture with a project ID, and that none of the individual grants on their own would have captured. And beyond just the funding aspect, if you're looking at any of the inputs, the organizations, the people, the instruments, anything else, it can really give you the bigger picture of what outputs in the long term are associated with that. Finally, I'd say, beyond the strictly efficiency and reporting improvements you might get out of a RAiD, I think that capturing the history of a project, and that's something we're building into RAiD, the ability for the RAiD to evolve over time as people come and go, organizations enter and leave, publications come out, and so on, really gives you an important piece of metadata, or paradata, about the research outputs, the datasets, the publications, the software that can come out of a project. From an open research perspective, I think that's important as well. So I'll leave it there; I ramble when I get going. That's great, Sean, thank you so much. It seems to me that one of the wonderful things about RAiD is how all-embracing it is: if you can register a RAiD at the very beginning of a project, you get all the benefits of being able to bring all the different types of data in at the start, but it can also keep on living well beyond the end of a project, so you can add all the historic information as well. A pretty amazing thing. And Matt has already mentioned in the chat how DataCite is going to start using RAiD.
I don't know if anybody else would like to chip in about how your organizations are planning or thinking about using RAiD going forward. To the point about how we all work together: we at Crossref already collect project information in grant records, and I'll talk about this a little later, so obviously we're interested in what RAiD is doing, and we're part of the project's advisory board, so we're definitely keeping an eye on what develops there. There's an interesting comment in the chat from Laurie about how in the US people tend to equate "project" with "grant", which, as you say, Laurie, does not acknowledge multiple funding sources. Particularly in some humanities-type projects, people tend to get lots of little grants which are often very hard to keep track of against outputs and things like that, so I have heard a lot of enthusiasm from that community about RAiD, which is great. In the interest of time, I'm going to move us on. Jennifer, you're up next, and I think you're the perfect person to answer this question with your director of partnerships hat on. It takes a village, doesn't it? We've been talking a little about how your organizations can work together. Who else needs to be involved in order to realize the benefits outlined in the report? Again, this is obviously partly a question specifically for Australia, and Natasha, you talked a little about this in your comments, but there's also a global element: national strategies are great, and we're delighted that one is being developed, but PIDs are global, so at some level this is also a global initiative. Jennifer, who do you see as the core constituents who need to be involved to make sure we are successful?
Well, I think the report covered a lot of this very nicely, but there are a couple of things I want to call out. From my point of view, some of it is definitely a question of scale. We've talked a lot about funders and funding information, and registering grants, I think, is a really big piece of this. Funders can register grants with Crossref currently, and to an earlier question in the chat, and some of the notes in the chat about definitions of what can be included and how PIDs are related: I'll put a link in the chat. It gives an overview of what is collected in grant information, so you'll see how funding is defined in general and some of the project information we collect. ORCID iDs, for example, are part of the information collected in grant records, so there are a lot of ways to tie these together already. And we need more of it, basically: getting more funders involved, getting more of those grants registered. Even older grants are often important to relate to research outputs because, as noted here, it takes time to complete these projects and to publish the outputs. So I think that's a very big piece of it. The other thing I encounter all the time, in one of the groups I work with, our service providers, third parties like some of the vendor systems mentioned in the report, and this comes up so frequently, is the manuscript submission systems, because they are the initial point of capture for so much of this information. Those systems are certainly involved, but things develop: new identifiers emerge, and the data that surrounds them evolves. Because that's really a lot of the key to it for me, and in the report as well: it's the metadata reuse, the metadata associated with the PIDs, that is so important to all of this.
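As an illustration of the open funder and grant metadata being discussed, here is a small Python sketch (an editor's addition, not part of the webinar) that builds a Crossref REST API query for works acknowledging a given funder, using the `funder` filter of the `/works` endpoint. The funder ID in the example is assumed to be the Australian Research Council's Open Funder Registry ID; verify it against the registry before relying on it.

```python
from urllib.parse import urlencode

CROSSREF_WORKS = "https://api.crossref.org/works"

def funder_query_url(funder_id: str, rows: int = 5) -> str:
    """Build a Crossref REST API URL listing works that acknowledge the
    funder with the given Open Funder Registry DOI suffix (for example
    '501100000923', assumed here to be the Australian Research Council)."""
    return CROSSREF_WORKS + "?" + urlencode(
        {"filter": f"funder:{funder_id}", "rows": rows}
    )

print(funder_query_url("501100000923"))
```

Fetching that URL returns JSON whose items carry the funding assertions deposited by publishers, which is one concrete form of the metadata reuse the panel is describing.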
Those systems need to accommodate the new fields. If there is no field in one of those systems to collect a grant DOI or a RAiD or whatever the case may be, then it really interrupts the metadata supply chain. So I think we've got the right communities involved for the most part, but maybe the participation needs to scale up a little, or adapt as things move along a little more quickly. Any other comments? Are there any groups or organizations that people feel really should be brought in who perhaps aren't participating as much as they should be at the moment? The one other thing I might add is libraries in general. The RIM and CRIS systems have come up in the report, of course, but there's so much really rich information out there, and the traditional library systems, MARC records and things like that, just don't accommodate a lot of this information in many cases, which I think is a little unfortunate, and I hope that changes a bit over time. I think, from my perspective, one of the possible challenges, and you slightly alluded to this just then, Jennifer, is getting the service providers and vendors on board, because none of this is going to work if the people building systems don't actually build PIDs into them. There are clearly some great success stories there, but I think we've still got quite a long way to go. So finding ways to bring those people into the conversation, and I see a floating thumbs-up coming from somebody, making sure they're brought into the conversation and really brought into the idea of making this a reality, seems to me to be a really important thing as well, so perhaps something to think about going forward. Sorry, we are.
We knew this was going to be a good, interesting, long discussion, so I'm going to move us on again, and Chris, as you know, you are up last. I think this is a nice wrap-up question that will segue into Josh summarising very quickly right at the end: what do you see as the concrete next steps on this? How can we actually make this a reality? What will that look like, both at ORCID and more generally? Sure. Well, hi everybody, good morning, good afternoon, good evening, wherever you are. Nice to be here. Picking up on something that was clearly stated in the report, and Josh mentioned it in his talk earlier: the question really is, given that this is a collective action problem, how do we move it forward? It's pretty hard to move forward on all fronts at once. So one of the important things to think about here, and it's the thing we're doing at ORCID, but I'd also advise anybody in their own community to think about this, is: what's the thing to do next? Not trying to do everything at once. The things to do next are the things that are going to have the most impact for the most people. A really good example from one of the case studies in the report is the integration of ORCID into the ARC grant application process, because, I'm assuming, in Australia this is a system used by lots of people, so it's broadly applicable. And those kinds of processes tend to be quite high stakes: people are highly motivated when it comes to getting grants, as they are with funding generally. So having a really good integration with PIDs in that system is going to drive impact, and it also then serves as a case study: look at the benefits that can be achieved from this kind of integration.
So in the case of ORCID, one of the things we're looking at, and this picks up on something you said earlier, Jennifer, as well, is the service providers that have ORCID integrated into their systems. We have a certified service provider program that will be relaunching next year, and we really want to encourage and motivate service providers to build better, deeper integrations with ORCID, because if we get a better integration in a service provider that's used by 1,000 institutions, that's going to have much more impact than one integration at one institution. So I'd encourage everybody in their community to think about the most impactful processes and workflows to focus attention on, because I think you get further by making those important steps than by trying to take thousands of tiny steps. The other aspect of that is incentivization: figuring out the incentives for every actor in the system to move in the direction of adopting PIDs. We live in a fairly complex, multi-stakeholder world. We've already talked about researchers and institutions and funders; there are vendors, there are publishers. Everybody's got to get something out of this, because everybody has their own priorities, and everybody needs to be incentivized to participate. So, referencing that case study in the report again, the ARC implementation: there's a huge incentive for researchers to participate and use that integration, because they save a bunch of time and hassle, right? And they're also delighted by it. It would seem, from the anecdote from Professor Chapter, that this integration has made his life better in a way that seems unbelievable, that seems like magic. I think we want to create those incentives for people to actually participate, because, let's face it, researchers want to do research.
They don't want to do administration, and anything that gets administration out of their way and makes things easier for them is going to be positively received. So I often see, in our case... (Chris, you just put yourself on mute. What happened?) ...I often see ORCID integrations which are just collecting ORCID iDs and doing nothing with them, and that's just an extra step for researchers that doesn't help anybody. So if you're going to do an integration, think about how you're going to use whatever PID it is, and how you're actually going to give some benefit back to the person you're asking to do something, because again, that will create positive momentum. Automation is a big part of that, and defaults are a big part of that. The last thing I'll say is: I encourage everybody to think about minimizing button presses. At ORCID we're all about researcher control, right? We want to give everybody control over how their data is used. But increasingly, we want to make sure that the thing most people probably want, whatever is most useful for most people, happens by default, and then they can go and change the settings if they want something different to happen. And again, I see a lot of integrations that still involve a lot of button pressing. If you have the possibility of automating a workflow, turn it on by default and let people turn it off if they want to, rather than making them press an extra button to turn it on, because there's a good chance they'll never find that button; they won't know what it is until they see the benefit of it. So let me stop there, but I'll say it's about sequencing, figuring out what you do first and what you do next for impact, making sure there's something in it for everybody you're asking to do something, and automation, so that people have to do less to get what you'd like them to get out of the system. Thanks, Chris. I'm pretty sure there are cheers all around for all those suggestions.
We are about to wrap up. Does anybody have anything to add to what Chris said before I hand over to Josh to wrap things up for us? Thank you very much, Josh, and thank you very much everyone; this has been a really interesting conversation. I appreciate it. Thanks, Alice, and yes, thank you to everyone on the panel for your thoughts. It's been a really insightful, and I dare say slightly inspirational, conversation. I think there's a lot coming out of this. In our work we focused on institutions, and we focused on the time that could be saved by automated metadata reuse. As I said in my presentation, that skips over a lot of potential benefits. What we really need to think about is PIDs as a bridge between these systems, whether they're open or closed, institutional or publisher-run, wherever you are in the research lifecycle. Consider the connections between PIDs: we talked about how you could attach a RAiD to a grant identifier. But what if, as RAiD picks up adoption, you were able to auto-update the grant record when an investigator is hired six months into the lifecycle of a project? That could flow back to the grant ID. And when somebody signs in with their ORCID iD in the publishing process, the publisher's system looks at their ORCID record, pulls up that grant ID and says, "Oh, you've worked with these people. Is this the correct grant?", and then it automates the grant acknowledgement, picks up the ROR ID from their ORCID record, automatically populates their affiliation, and saves them time. And within the publisher, they're able to look at the RAiD and the grant and their previous publications, identify potential conflicts of interest, and streamline the process of organizing peer review. There's a whole realm of possibilities in working across communities. And I think this speaks to Chris's comment about aligning incentives.
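The autofill flow Josh describes can be summarized in code. Everything below is hypothetical (an editor's sketch): the record shape and field names are invented for illustration, and a real implementation would call the ORCID, Crossref, RAiD and ROR APIs rather than read a local dictionary.

```python
# Hypothetical shape of the linked PID data a publisher might assemble once
# an author signs in with their ORCID iD; all values here are placeholders.
author_pids = {
    "orcid": "https://orcid.org/0000-0000-0000-0000",
    "ror": "https://ror.org/01example1",
    "grant_doi": "https://doi.org/10.99999/example-grant",
    "raid": "https://raid.example/10.99999/example-raid",
}

def prefill_submission(pids: dict) -> dict:
    """Pre-populate a manuscript submission form from linked PIDs instead of
    asking the researcher to re-type affiliation and funding details."""
    return {
        "author": pids["orcid"],
        "affiliation": pids.get("ror", ""),
        "funding_acknowledgement": pids.get("grant_doi", ""),
        "project": pids.get("raid", ""),
    }

form = prefill_submission(author_pids)
print(form["affiliation"])
```

The design point is the one Chris makes about defaults: the researcher supplies one sign-in, and every downstream field is filled from metadata that already exists, with the option to correct rather than the obligation to re-enter.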
There are costs at every stage, and sometimes, if a publisher doesn't pick up a piece of data, it's an institution or a funder who has to pay the price of replicating that information or filling that gap, but often the real burden ends up falling on the researchers. One of the things we've really covered today is that if we're all working together, there's a lot we can do to bring down the cost and complexity of integration, but there's also a lot we can do to accelerate and deliver the value of these PIDs, a value that goes a long way beyond time or money savings. I think if we can keep a shared focus on that shared goal, and be pragmatic, look at improving our own systems, look at building up adoption, but also look at those key integrations where that value is going to be delivered, and think about how we incentivize the other communities that our researchers depend on to develop PIDs, implement PIDs, and do it well, effectively, and soon, then I think we've got a real case to be made here. I look forward to seeing how our colleagues on the call, from the various PID providers and the community representatives joining us today, work together to drive this forward nationally and internationally, as well as in the lab, in the library, and in the archive.