All right. Thanks everybody. I'm going to go ahead and start. My name is Greg Madden. I'm the CIO at UCAR and NCAR. So NCAR is the National Center for Atmospheric Research. UCAR is the University Corporation for Atmospheric Research, which operates NCAR. With me today are Jennifer Phillips, our Library Director, and Matt Mayernick, the Assistant or Deputy Assistant Director, whatever that is. And you'll notice that we put our ORCID iDs on the slide just because we felt like, you know, you really ought to do that. So I'm going to give you a little bit of organizational background about UCAR and NCAR and then our immediate motivation for what we're doing over the next five and a half months. And then Matt and Jennifer will come in and give you all the interesting pedagogical stuff that's associated with this. So NCAR and UCAR. NCAR is a Federally Funded Research and Development Center. It's the National Center for Atmospheric Research up in Boulder. I think all three of us just drove down today. UCAR is an operating entity, and basically the only thing it operates is NCAR. So these two organizations have been joined at the hip for 60 years. We have a cooperative agreement with NSF, and that's how that works. We do have about 120, maybe slightly more, member colleges and universities, and a .edu for our email addresses. As it turns out, we're not particularly educational. We're not really government. We're not really a bunch of things. We're a little bit like a tiny piece of your university administration coupled with your VPR's office and some institutes. That's probably the easiest way to look at it. We have a lot of facilities. We've got a supercomputing center up in Cheyenne, Wyoming. We do community coordination, and we obviously do atmospheric research. So, our immediate motivation. Matt's going to talk more about the broader motivations around PIDs and ORCIDs and that sort of thing.
But our immediate motivation is that we've got a UCAR President's Strategic Initiative Fund grant to improve the adoption of ORCIDs across NCAR and UCAR, and also to complete some initial integrations of ORCIDs into our systems to help with NCAR's research impacts initiative. So right now, when the NSF asks us to, you know, show our work, it's very difficult. It takes a long time and a lot of effort to pull everything together and show where their money is going. So we have a research impacts initiative to improve that. And then really we're trying to drive towards what I call a virtuous cycle of integrations. So this is the slide for this. The nice thing about being the CIO is I can be wildly overambitious and then back it off when it turns out to be unrealistic. So these are some of the integrations we have either in progress or planned for ORCIDs. On the left-hand side first is sort of our research products. OpenSky is our internal database of organizationally affiliated research output, so we have ORCIDs in there already. We are going to use ORCID as the authentication system for a variety of our data products so that we know who's accessing our data. We've got the Geoscience Data Exchange, our Research Data Archive, and the Climate Data Gateway, three different data products, and we're trying to make sure we know who's using those. There's our EOL Field Data Archive, EOL is the Earth Observing Lab, and we're including ORCIDs in our metadata records there. Also in our metadata records for the DASH Search. What does DASH stand for again? It's the Digital Asset Services Hub, our consolidated search for all products across the organization. Yeah, thank you.
Then more on the IT side, and this is where I come in as enterprise IT, we're trying to get this spread throughout all of our enterprise systems so that at that operational level, again, UCAR is the operating entity, we can really be confident we can meet NSF's cooperative agreement and really tell it where its money is going, where all of our funders' money is going. NSF is not our only funder. So we really want to be able to show the impact of that funding. Some of the things we're going to do: get the ORCIDs into our Mule platform, which is just an integration platform that makes it easier to integrate across multiple systems. We're going to get it into our research administration system. We have Kuali Research, so we want to make sure it's in there so that we understand who's applying for grants based on their ORCID, and then we can report out based on that. We're going to get it into our organizational financial system. We'll have Workday Financials in about a year, assuming all goes well. And we want to integrate from that research administration system into our financial system so we get better financial reporting on all of the grants. We're going to get it into our HR system so that we can tie our researchers to their ORCIDs internally within what we're doing for HR, and get it into our research information management system. We're about three months away, well, probably less than that, from having one selected. We don't actually have a tool right now. That's why it takes us so long to do our research reporting. And then we're going to also get it tied into our identity governance and directory services. So really, we're trying to spread ORCID everywhere it can be spread so that we can tie all of our information together better and just get better reporting much faster.
So the problem is that now, when we get asked to do these things, like report on where the funding is going, it can take a bunch of people weeks to pull all that data together from multiple different sources. We're really trying to get that down to hours: go into a system, click a button, all the graphs and money come up, and you're done. So we're really trying to go from weeks of multiple people to hours of a couple of people so that we can really improve our reporting. So that's what I've got, and now over to Matt. All right, thanks, Greg. So the theme of the rest of the talk, and Greg kind of kicked us off, is trying to move from the idea of assigning persistent identifiers to doing things with them. If you think about the slides Greg already showed, on the first slide, our ORCIDs were there. That's sort of a label, right? That doesn't do anything for you if that's all you have. But if we can use ORCIDs in the sense that Greg described to connect these systems, then we can do something with them. So that's the theme, and I have a few slides on each of those themes, and we'll talk about a few cases. I just pulled this up to emphasize that point. First of all, to step back, the persistent identifiers we're talking about here are not, again, something you might have in an internal HR system; it's the idea of an identifier that can be used to persistently identify something on the web. And as the second part of this quote says, there's some actionable aspect to it, right? It resolves to a page where that asset is, if it's a publication or a dataset, or, in the case of a person, who is not a digital entity, to information about that person. So the core idea of the persistent identifier versus some generic identifier that you might have in any system is that it's web-based and there are some actionable aspects to it.
And so that's what we want to get to in this talk, and we're interested in hearing your thoughts as well on what we might do with identifiers or what you've done with identifiers. So I'm going to talk a little bit about the assigning part first just to set the stage and the landscape. We've been assigning DOIs through the DataCite organization since 2012, and we have also been assigning ARKs, which are Archival Resource Keys, another type of persistent identifier, since about the same time. That's a service supported by the California Digital Library. And you can see the breakdown of asset types here just to give you an idea of what we're using these for. So DOIs, we're really using those for, you know, citable objects, to create a persistent citation for datasets, texts of various kinds, software. And for folks who were in the previous session, 41 is a dramatic undercount of the amount of software produced by our organization, but if you were in that session, you know there are sometimes challenges in getting a good grasp on that topic. The reason we use ARKs in addition to DOIs is that in our open access repository, which is called OpenSky, which Greg mentioned earlier, many of the things we collect are published articles, that's the top item there, that themselves have DOIs created by the publishers. We didn't want to assign a second DOI and confuse that situation. So we use ARKs as a different type of identifier for our own resolution of our own persistent objects. And then, you know, various other types of assets going on down. I'll give a quick promotional shout: if you want pictures of clouds, we have lots of pictures of clouds in our OpenSky repository, so please take a look at that. I'll call out one more example here, since we're just getting this project kicked off, and I know Martin's right here, he's funded our project.
So we have a new research coordination network funded through the NSF FAIROS program that's focused on persistent identifiers for facilities and instruments. I think I showed briefly on the last slide the physical object type there on the left, the fourth one. We have assigned about 25 DOIs to things like what's shown here. This is a set of integrated surface flux measurement towers and tools that one of our NCAR labs provides. We've also assigned a DOI to some of our aircraft, and in line with what Greg said earlier, we're trying to use these DOIs to facilitate persistent citation to them, or, you know, metrics of usage, attribution of usage, things like that. So if you're interested in this topic, please feel free to come and talk with me. We're just getting this project kicked off now, and we're digging in a lot more in the next few years. If you're interested in facilities and instruments, even kind of campus-level facilities, we have colleagues at the University of Colorado and Florida State also in this project, so we're happy to talk to you about that. There are also externally generated identifiers, and these are a couple of examples that are more prominent for us. Greg already mentioned ORCIDs: who has ORCID iDs within our organization, and how full their profiles are. These are the numbers so far, and our organization as a whole is about 1,200 people, about half of that research staff. So we think we're actually doing pretty well in terms of the numbers; we're just still trying to suss out exactly what those mean. We're also interested in Research Organization Registry IDs, or RORs. We know UCAR and NCAR both have RORs. Some of our subunits have RORs. We're not quite sure how some of them got created, because we didn't create them ourselves. So organizational identification can be somewhat tricky. What is somebody's affiliation? Are they UCAR or NCAR? This is an internal debate we have far too often.
But these are things that we have less control over, put it that way. We do not create these identifiers ourselves. So that's the landscape of what's been assigned or created, and now we're going to talk about using persistent identifiers. Again, that's the theme we're trying to get to: moving from just assigning them to using them. And I like this quote because it emphasizes the point that for scholars to take the time to actually use persistent identifiers, there needs to be a clear case about the benefits of using them. So that's what we're trying to do with a couple of these cases, in addition to the ORCID project Greg mentioned: doing things with them to incentivize people to create them and to use them more consistently. And this is the theme. This is just a cartoon, there's nothing particularly rigorous about it, but the idea is that there are lots of persistent identifiers for lots of things, and these things are often related. The people create the data. The people use the tools. The people create the services or use the services. Publications derive from the data, use the software, use the instruments. And we think there's a lot of potential in using these identifiers and connecting them more. That's going to be a little bit of what we talk about in later slides. And there's a lot of interest externally, from DataCite, which provides DOIs, and other organizations, in creating a "PID graph," quote-unquote, which would connect these in a more formal sense. So the two cases we're going to talk through for the rest of the talk are linking scientific papers to underlying data, software, and other resources to enable discovery, and gathering impact metrics to assess impact and identify contributions to scientific work. So the first case, linking scientific papers. This was a long-running project for us.
Could we collect and display linkages, specifically between papers and datasets, though it extended to other kinds of resources such as software and instruments? How do we make this information usable and understandable, and how do we do this in a tractable way? That means automated as much as possible, with minimal maintenance. The workflow that we've come up with with our software engineers in the library, which is now operational, is that when we collect open access copies of articles produced by NCAR and UCAR staff, we have a PDF parsing process that looks through them for all DOIs, and then we're able to use the metadata associated with those DOIs to tell what they point to. Are they to articles? Are they to data? Are they to software? Are they to instruments? We can use that metadata to determine what they are, insert that back into our own systems, and then display the related links. And we like this process because services such as Web of Science or other citation services often don't look at the full text, so we know they're missing things, and we feel this is more complete. So this is just a screenshot of the outcome of that. This is a service we've added to our institutional repository. This is a landing page for an article, and on the left, in somewhat small text, there are supporting datasets and supporting software, and those link directly to the identifiers associated with those resources. So we feel this is a really good value-added service that we're able to offer simply because these identifiers exist and have metadata associated with them. So that was case one. I think at this point I'm going to turn it over to Jennifer. Okay, thanks everyone. So, as Greg mentioned, we have a multi-year initiative right now to implement a research information management system for NCAR and UCAR.
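(A minimal sketch of the DOI-extraction step Matt described. The library's actual parser isn't public, so the regular expression, the punctuation cleanup, and the type mapping here are assumptions; the `metadata` dict stands in for a real DataCite or Crossref API response.)

```python
import re

# Matches modern registry-assigned DOIs (10.XXXX/suffix). The exact pattern
# NCAR's parser uses is not public; this is a common, assumed form.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

def extract_dois(full_text: str) -> list[str]:
    """Return de-duplicated DOIs found in text extracted from a PDF."""
    seen, dois = set(), []
    for raw in DOI_PATTERN.findall(full_text):
        doi = raw.rstrip('.,;)')   # strip trailing prose punctuation
        key = doi.lower()          # DOIs are case-insensitive
        if key not in seen:
            seen.add(key)
            dois.append(doi)
    return dois

def classify(doi: str, metadata: dict) -> str:
    """Map a resourceTypeGeneral value (as DataCite metadata carries it)
    to a local display category; unknown types fall back to 'article'."""
    kind = metadata.get('resourceTypeGeneral', 'Text').lower()
    return {'dataset': 'data', 'software': 'software',
            'instrument': 'instrument'}.get(kind, 'article')
```

In a real pipeline the `metadata` argument would come from resolving each DOI against the registry, and the resulting category would drive the "supporting datasets" and "supporting software" links on the repository landing page.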
And PIDs are integral to this ambition, especially because NSF and our other sponsors want quantifiable measures of the intellectual merit and broader impacts of NCAR science. These are very broad concepts. We have a long history of collecting publications and doing bibliometric analyses, as well as staying on top of our facility usage. But really, that has limited value. It certainly is valuable, but it has limited value in terms of representing the work of the center as a whole and its impact. And so, again, as I mentioned, we have this initiative to define new metrics beyond the number of publications and times cited, and to implement the system for our organization so that we can do some of the work that Matt and Greg have been describing in a more integrated fashion. Our envisioned RIM system builds on our practice of managing the citation record for NCAR peer-reviewed publications. We also have a custom database for staff activity reporting. We are, as Matt mentioned, long established now in assigning DOIs for outputs other than publications: for datasets and software, as well as instruments and facilities. However, in spite of good PID assignment, we do have some coverage challenges in our systems. There are no common controlled vocabularies, and there's a lot of customization and stand-alone-ness to the way that we're doing research information management right now. So one of our main goals for the RIM system implementation is to connect our systems. A good example, and this is maybe getting a little bit into the weeds, is that right now we manage associations between publications and grants and awards within the institutional repository, and we have a custom setup, I'll call it, to query the grants database and match that to publication metadata, and then we manage that relationship behind the scenes in the repository.
And with the new RIM system, we're hoping to pivot and have a more centralized location for managing relationships between things like grants and awards and their associated outputs, not only publications, but the other products that stem from sponsored work. Another ambition we have here is more interoperability of our system with the broader research analytics ecosystem. For example, right now we have ORCIDs in our institutional repository, although they're not asserted globally across publications. Rather, we have an internal identifier for our personnel. And this is very good for our own purposes, but it is difficult to integrate with the broader landscape when we're using this sort of proprietary identifier for our publications. And I'm realizing I need to move along here. So we have an RFP out for a research information management system to build on our practices, and I'm trying to get this slide to advance. There we go. These are draft graphics to be used in promoting our work and trying to get uptake. There's maybe a lot of detail here, but the thing I would point out is, as I just mentioned, relationships between things like publications and associated datasets, or publications and award numbers, are happening internally to OpenSky, our institutional repository platform. And then if we are asked questions about that, it takes specialized people's time to answer them. This really holds true for pretty much most questions that we get about the merit and impact of our work. We can easily answer questions about numbers of citations, but once we go beyond that, it requires a lot of manual manipulation. And so again, as Greg was attesting to, a lot of time.
And so I'll talk a little bit about our hoped-for future state, which is where the RIM system will serve as a hub for research metadata and the place where we manage the relationships between the different PIDs, basically. I was thinking the future state title for the slide should be, you know, "PIDs make the dream work" or something like that. But, you know, when we have multiple author identifiers, there are limitations there. One of the ways we imagine PIDs will be useful to us is through broader adoption of ORCID identifiers by our own researchers, but also greater awareness of the ORCID identifiers associated with researchers using our facilities and platforms. Maybe a good example here is the usage of our supercomputer. Currently, people can gain access to supercomputing time, and the only way we are able to find out what came of that is to directly contact those people and ask them: did you create any publications? What came of your time on the supercomputer? By leveraging the ORCID authentication system, capturing ORCID iDs for our HPC users, and then being able to assert usage of our facility back to those ORCID profiles, we're hoping to gain a broader understanding of the usage of our facilities. Because, going back to what I was saying before, we have very good bibliometrics. I can answer all the bibliometrics questions the NSF might want to throw at us, but that is really insufficient for our needs. What we would like to know is: what is the uptake of our other products beyond peer-reviewed publications? What does usage of our facilities look like? And what are the downstream impacts of that? So, again, identifiers are key to this environment.
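(To make the supercomputer login idea concrete: ORCID's API supports a standard three-legged OAuth flow with an /authenticate scope, and the token response itself carries the verified iD. This sketch shows the two ends of that flow; the client ID and redirect URI are placeholders, and the token exchange and error handling are omitted.)

```python
from urllib.parse import urlencode

ORCID_AUTHORIZE = "https://orcid.org/oauth/authorize"

def build_login_url(client_id: str, redirect_uri: str) -> str:
    """Step 1: send the HPC user to ORCID to sign in. The /authenticate
    scope asks only for the user's authenticated ORCID iD."""
    params = {
        "client_id": client_id,        # placeholder; issued by ORCID
        "response_type": "code",
        "scope": "/authenticate",
        "redirect_uri": redirect_uri,  # placeholder callback URL
    }
    return f"{ORCID_AUTHORIZE}?{urlencode(params)}"

def orcid_from_token_response(token_json: dict) -> str:
    """Step 2: after exchanging the returned code at ORCID's token
    endpoint, the JSON response includes the verified iD directly."""
    return token_json["orcid"]
```

The verified iD returned this way could then be stored as the key that ties HPC allocations, storage, and downstream outputs back to a person.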
The last thing I would say refers back to that question of what's the system of record for managing relationships. Right now we have these relationships tucked away in different systems, which is fine because Matt and I know this, but Matt and I aren't forever, and so we need a more central way of managing this information. A case in point would be our grants information: we would like to have the associations between sponsored research and its various outputs, you know, peer-reviewed publications and the like, managed centrally, so that people other than highly specialized individuals who are steeped in how these systems work are able to answer questions when we get them from our sponsors. And so I will turn it back over to Matt to wrap this up with a final slide. Yes, this is the last slide. And these are, in some sense, asterisks, right? We've talked a lot about the benefits of PIDs, and obviously we are fully on board with that, but there are some very practical challenges. As I said at the top, PIDs alone do not provide much value. You know, I've worked with people to assign DOIs to things, and they never put them on a webpage, they never promote them, and, not surprisingly, they have zero citations ten years down the road. Assigning a PID by itself is not solving anything. A couple of points here: inconsistent use of PIDs is still very much a limiting factor for a lot of these services we're talking about, unpopulated ORCID profiles, inconsistent data citations. And to a certain extent, if you base metrics or services on these, you can get undercounts that look like real counts but are actually deceiving. One example: we did a report for NSF a year or two ago on data citations specifically, so citations to our DOIs for datasets. And we were able to show that from 2016 to 2020, it went from about 40 to over 400.
We were very excited. We had a 10-fold increase in five years. This is a great thing. When we showed this to our organization's leadership, they said 400 citations in a year is not very much; we're going to take this chart out. So they threw that chart in the garbage, even though we were very happy with it. Because to them, you know, for how many datasets we have, which is just over 10,000, 400 citations in a year is not very much, right? And it's true that that was not the actual amount of usage; it was an indicator of DOI usage, not of dataset usage. I mentioned external services like Scopus and Dimensions. These are good. They're getting better. They're much better than they were five to ten years ago. But they're still inconsistent, they're still incomplete, and so they're somewhat hard to use for these things. And the last point is that PIDs require management and maintenance. You need to keep your DOIs pointed to the right place. With ORCIDs, people's names change, things like that. So there are a lot of things still to work out, but we're hoping we can make a lot more progress in the next five years, as we have in the previous five. So I'll stop there and take questions. Is this on? Oh, thanks. So Matt and I were involved in a project where we talked about curation thresholds being a great time to try and gather this kind of information: I'm submitting a proposal, writing a paper, doing my annual report for NSF, things like that. What I find interesting about what you're doing is that libraries, research administration, and enterprise systems are coming together. So I'm going to paint a scenario, and you tell me if it's desirable, undesirable, or something in between. I'm a new employee at NCAR, and when I'm in Workday doing my payroll information, you either require me to create an ORCID or you capture it. That is the dream. Yeah. I mean, that's not quite true. I mean, look, we're moving to a world where we really need to know who people are.
I mean, if you look at some of the external factors, like NSF's requirements on foreign collaborations right now, it's really critical that we know who we're collaborating with. And going back to the previous talk about the use of GitHub: if you ever actually look and see who your collaborators are in GitHub, you're not going to recognize two-thirds of them, and it will terrify you. So that's a very scary thing. The more we can drive people to verifiable identities, the better off we are; the less we have to manage those identities internally, the better off we are. So this has a lot of downstream value that doesn't really have anything to do with research, but more with research administration and research compliance. So I have a lot of reasons for really wanting to do this. And federated identities, and this is a type of federated identity essentially, to me, this is the way to go in terms of keeping us compliant with all the NSF stuff that's coming in the next 10 years. It's just going to get worse and worse. So yeah, I would love it if people came in and got an ORCID. Frankly, I would love it if they authenticated to all my systems via their ORCID rather than me assigning internal usernames and email addresses, right? There are a lot of things here that could take so much work off the enterprise side and give research benefits and research compliance benefits. So yes, I agree with that direction. I don't know that we'll get there anytime soon, but I totally agree with the direction. Tom Morrell, Caltech. I was really excited to see your enhancement of dataset and software links via PDF scraping. So I was curious what percentage of the publications in your repository you were able to enhance in that way, and is any of that code open source? To answer your second question first: it should be. I need to talk to my colleague who wrote the code. I know he would be very happy to share it.
So whether it's in an open source repository now, we should make it so. In terms of how many, I don't have the most recent numbers, but I want to say that, again, this is one that's gradually increased over the years. We started looking, I think we went back to 2020, because we instituted this in 2021, I think. I think it was around 15% at that point, and it's more around 30% now. So it's been fairly stable around 25% to 30% for the past couple of years. Very cool. Thanks. Yeah. Mark Loversweiler, University of Oklahoma Libraries. I'll also say that I'm a ringer: I'm a meteorologist by training, before moving to the libraries, and I was also a member of Usercomm for Unidata. The question I have is related to the software end of things. I know, from working with that at the time, it was not just people within the atmospheric science community. The Local Data Manager software package for moving data packets is used by a lot of commodity commercial-type companies, where reporting is not going to be in the same interest or vein as for us within the science environment. So how do you work with programs like Unidata, which are code developers, or say your radar group, which came out with SOLO, right, which is the standard? And we're dealing with issues of reproducibility; that's also coming up, and people don't necessarily track what version of the software they've been using for their work. I see it as a culture change for faculty and researchers outside of your organization. So how do you address that? You have these needs internally, and I agree with everything you're saying, all the problems, but also all the advantages. How do you get that culture change for the users that are not within your ecosystem of NCAR or UCAR, when you're dealing with the academic institutions, all the field programs that come up, and then the private industry that's making use of your software as well? Solve all the world's problems.
We've got five minutes here. I can respond first. So first I'll say that most of the Unidata software that you're referring to has DOIs at this point. Now, it's a great point, though, that many of the users wouldn't be writing papers with the software, so you wouldn't track it that way. And so I think we see the DOI-based citation-type counts as complementary to other types of usage counts, to get at that aspect of it. In terms of the culture change, we've done a lot of engagement with the American Meteorological Society in particular, which of course is very relevant to our community. I was on a panel that wrote data citation recommendations in 2015, and more recently there's a similar statement on software citation and archiving. And I know that the AMS publication side, so again, publication focused, has changed some of their author guidelines in recent years to have data availability statements and software availability statements. So I think the culture part is a community thing, and we really like to work with the publishers, AMS and the AGU, which is the American Geophysical Union, to be the more visible entity in that and to get the message across to more groups. Yeah, I'll just add real quickly, and this is going to sound kind of way out there, but when you look around an organization, you see like 40 units, whatever those units are. When you look at the researchers, if you've got 1,400 researchers, that's 1,400 more units. Each of them has their own set of processes, their own background, their own interests, their own visions, their own goals, their own mission, everything. When you look at that culture change, you cannot get 40 units to adopt a common process. You're never going to get 1,400 faculty to adopt a common process. Thinking that you ever will is just beyond a dream. It's just silliness, right?
So what you have to do is come up with things that are close enough to the most common workflow you can think of that they allow the largest number of faculty possible to use them well. And you can't aim for 100% of them, because you're never going to hit that, but you can try to get as much as you can. And that's really the best you can do, because they are going to keep having their own individual missions. I mean, that's what they're there for: to have individual missions and goals. You can't put an end to that, or you've killed the research. I'm Megan Sensany. I'm at the University of Arizona. I've been working with members of our research office to try to socialize ORCID work and adoption. And I was looking at your kind of blue sky, that might be a pun here, list of integrations for products and systems, and then hearing your very pragmatic answer just now. And I'm curious, if you were to look at that list, what you would prioritize in terms of potential for highest impact, in terms of moving specific integration efforts forward first. I feel like locally we've done a lot of low-hanging-fruit, pragmatic approaches to what would be easiest to integrate, but I would be curious to hear your thoughts on, if you were to prioritize some of those items by impact, what you might prioritize. My quick answer would be the institutional repository, the research administration system, your grant system, and your research information management system. If I get three, those are my three. And I'll just add that the first integration we're working on within Greg's unit is Workday, which is an HR system. What we want to establish there is: what's the system of record to track ORCID iDs? We have a system of record for publications and for grants, and if we decide that Workday is the one for ORCIDs, then we can build other things from there. So that's where we're starting.
That's super helpful. Thank you. And I might add another item to our short list of priorities: the ORCID authentication mechanism. We're considering that as part of our initiative to get broad adoption of ORCID identifiers themselves. There's the use case I mentioned about the supercomputing facility and being able to authenticate people; right now, we're creating local usernames and passwords. We really want to do that in a best-practices way, and in one that also allows us to feed usage of our facility back to ORCID profiles, so that it's established for the researchers themselves, but then also for our organization when we go to answer questions about our impact. So yeah. Thank you. And just to add on to that: imagine a world where, by ORCID, we could call out how much CPU time, how much data, how much storage, et cetera, on an ORCID-by-ORCID basis for the supercomputer. That would just be brilliant, and we're nowhere near that right now. So I think we're getting into the break, but let's do one more question and then we'll be done. This might not be an easy question for thirty seconds. I understand why you've approached using DOIs for research infrastructure, like high-performance computing and airplanes and things, but it's really a misapplication of the DOI, and I'm sort of troubled by that, because you're pigeonholing something into something it's not designed for. The question that I have, though, is: what are the gaps in the persistent identifier landscape that you're facing, where we need to develop new PIDs, like for software or for research infrastructure, instead of turning to the DOI to solve every problem? I mean, we could use the Handle system, but let's think outside the box. Yeah, so I'll comment on that first. I mean, to a certain extent, anything beyond publications is a misapplication of the DOI system.
I mean, when we first started looking at data sets, it was kind of a goofy idea, right? They change a lot. If you have regularly growing data sets, the DOI, I think, kind of breaks down a little bit. You get the same data sets in multiple systems quite frequently. So any use beyond very stable objects is not a perfect fit. But the reason we're using DOIs for that purpose is that we had some of the same goals as we had for data citation: can we see who's using these things? Can we track who's using these things? We've had a really difficult time doing that, and we're looking for a mechanism to try. So that's why we're using DOIs. I agree it's a strange fit, but I think that's true for a lot of other resources too. And what we're trying to do with the grant I mentioned is build use cases for different kinds of identifier schemes. We're also looking at Research Resource Identifiers (RRIDs) in that context, which were created more for that purpose, as well as other identifier schemes, to say: we have use cases for citing facilities and identifying them, and we have identifier schemes and their capabilities. How do the use cases and the identifier schemes match up? That's one of the goals of the project I mentioned, to get to: is the DOI the right thing to use for these, and if not, what's a different and better identifier scheme? And very specifically, if you had a database of your institutional research assets, every column in that database is what we need PIDs for. That's really the short answer: if you can imagine every possible research asset, we need to be able to identify all of them. In terms of your other question, about what PIDs we need more of, I would almost push in the opposite direction: I think we need fewer new PIDs at this point and greater connections between the existing PIDs.
And, you know, to your point, figuring out which PIDs are really good for which purposes. So many PID systems have been started up, and I think we haven't connected them and really figured out how they work well together. So I would be hesitant to promote any new ones at this point until that's been sussed out a bit more. And I would add, in defense of our approach: we began doing this work almost twelve years ago, I think, and we initially assigned an ARK to the supercomputer at the time, and with the next generation moved toward a DOI, because the DOI had, what am I going to call it, brand recognition and traction with our researchers. So in order to encourage the assignment of these persistent identifiers, and to encourage citation of them, the DOI was really an obvious inroad in our community: researchers already had an established understanding of why DOIs worked, we'll say. And so it was an entry point for us as we got into this space, although we now understand that maybe some finesse was required. But I think this is a great illustration of what I was trying to say before, which is that the use case was citation, right? The DOI was very visible in that context. Whereas in the institutional repository, as I mentioned, we use ARKs; we want people to cite those, but the use case there is more about persistent location. We're not as concerned about metadata, and we're not as concerned about citation aggregation, and so the ARK is perfectly fine for that. So I think the point about use cases and identifier schemes and how they match up is really important here. All right, we should stop there. Thank you, everybody. I really appreciate your questions. Thank you.
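The closing point, that each identifier scheme matches a different use case and resolves through different infrastructure, can be illustrated with a toy resolver table. The mapping below reflects common practice (doi.org for DOIs, the N2T resolver for ARKs, the SciCrunch resolver for RRIDs); the function and the comments on each scheme's role are an illustrative summary of the discussion, not an official API.

```python
# Toy sketch: match the identifier scheme to the use case, then build a
# resolvable URL from the scheme's resolver prefix.
RESOLVERS = {
    "doi": "https://doi.org/",                  # citation, metadata aggregation
    "ark": "https://n2t.net/",                  # lightweight persistent location
    "rrid": "https://scicrunch.org/resolver/",  # research resources, facilities
}

def resolve(scheme, identifier):
    """Return a resolvable URL for an identifier under the given scheme."""
    try:
        return RESOLVERS[scheme] + identifier
    except KeyError:
        raise ValueError(f"no resolver registered for scheme: {scheme}")
```

For example, a citation-oriented object would be minted a DOI and resolve via doi.org, while a repository item whose use case is only persistent location could carry an ARK and resolve via n2t.net, which is the distinction drawn in the answer above.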