I'm Diane Goldenberg-Hart with the Coalition for Networked Information, CNI. You have reached part of CNI's spring 2020 virtual membership meeting, and we're so glad that you could join us. Today we have a presentation from four members of the staff at Penn State University: Dan Coughlin, Anna Enriquez, Cynthia Hudson Vitale, and Brandi Carl. They are joining us here today to talk about Beyond the Repository: Integrations to Support Open Access Policies. Before I hand it over to our panel, I just want to call your attention to a few features of the webinar environment here today. One is that there's a little Q&A box. If you click on the Q&A button at the bottom of your screen, a window will pop up and you can type your questions or comments in that box at any time. We'll wait until the end of the presentation to answer those questions. I'll come back and moderate the Q&A, but feel free to type those questions in there at any time; there's no need to wait until the end. There's also a chat box. If you click on the chat button, you'll see a chat box pop up. You're welcome to use that chat box to ask questions or make comments as well, and we'll be using it to send out pointers, links, and other information throughout the presentation. So without further ado, I want to thank you all again for taking time out of your day to join us, and I want to welcome our presenters and thank them for being here. And with that, it's over to you, Brandi.

Thank you, Diane. So today we're talking about the Penn State Libraries' work on repository integrations to support our brand new open access policy. And Dan, could you go to the next slide? We want to start out with a little background on this brand new policy and all the integration work we envisioned from the start. I will cover that. I'm Brandi; I'm the head of scholarly communications and copyright. Dan is going to cover the technical infrastructure.
He's the head of library strategic technologies. Anna will next cover the social infrastructure; she is our scholarly communications outreach librarian. And Cynthia is going to wrap up our talk today with our future plans, including assessment; Cynthia is the head of research informatics and publishing. So that's our little roadmap for today. Several years ago, our provost charged a university-wide task force to create recommendations for an open access policy for Penn State. On the task force, we had members who were non-libraries faculty, libraries representatives, and actually a research and data person who's not a faculty member, just members from throughout our community. The task force wrote a report on the state of open access at Penn State with a broad slate of suggested changes, as well as a suggestion for a Harvard-style open access policy. If you've seen Harvard's, you've basically seen everyone else's. The policy covers scholarly articles, with a waiver mechanism for individual articles. And our goal really was to increase the accessibility of Penn State-produced research while also providing benefits to faculty, like wide and open availability of their research, increased citations, and research impact. Both the task force discussion and the report included technology implementation as a way to enable success for the policy. So it wasn't just about the policy: technology is something that we considered really vital to the entire process from the beginning. Dan was also on the task force and led discussions on how we might develop technical solutions, like the ones we're going to talk about today, to a really persistent open access problem, which is getting faculty to deposit the right version of their paper at the right time. We can't make their research openly available without getting their papers. So what we determined was that creating systems to make article deposit easy was really necessary in order to achieve our goals.
And we recommended mediated, automated deposit of scholarly articles into the appropriate repositories and the creation of data and altmetrics collection processes, and overall we wanted to minimize individual researcher time. While creating additional compliance requirements, which is necessary because we're asking them to do something new, as much as possible we did not want to burden faculty with the logistics of complying with the policy. What we wanted to do was to create processes that are streamlined, automated, and transparent, and that require minimal effort. And this is a really big, lofty goal that we're continuing to work on. Beyond the technical components needed to be successful at implementing our new policy, we also considered the cultural integrations and social infrastructure necessary to be successful. The report expected that the libraries would create a plan for educating our community about the policy and all the details of implementation. Ultimately, it's a change to researcher workflows, and we really consider the need for outreach and education as going hand in hand with this new policy, just like the technical infrastructure. Next slide, Dan. So the policy was discussed and voted on at two sessions of our faculty senate in spring of 2019. After a very long approval process, it was approved in late 2019 and went live on January 1 of this year. Also of note, in addition to our open access policy, we do have an associated guideline, which is not a real policy, just an encouragement, that was approved to encourage the deposit of presentations, unpublished reports, works in progress, data sets, software, anything else that is not a scholarly article. Those aren't subject to the open access policy, but we wanted to encourage their deposit as much as possible and to use the repository to do that. The open access report suggests that Penn State update its policies to better encourage and value open access activities.
And that really meshed with our ultimate goal of increasing access to all types of Penn State research and scholarship. The policy itself, once again like very many other policies you've seen, requires researchers to provide a copy of their accepted manuscript to the libraries to be made publicly available. And we have several pathways for that, which Dan and Anna are going to describe in lots more detail. When the policy became effective on January 1st, beyond our usual manual deposit into our institutional repository, additional deposit options became available, and our waiver system and our new open access website all went live. So besides the institutional repository, there are three main parts of our new system. Number one, postprint deposit when faculty add publications to our faculty tracking system. Number two, publication discovery in our new libraries' metadata database, that's LMDDB; Dan will correct me on that one. And number three is the waiver system. So the database gathers publication information from lots of sources, which Dan will detail, and it queries an API to discover the open access status of publications. And eventually, later this year, we'll start sending emails to remind authors to deposit postprints or to submit a waiver. We still have a lot of hurdles to overcome. We did go with that Harvard-style policy, but we've already discovered we may want to recommend changes in the future that will make the policy a better match to our implementation. We're also working pretty intensively on assessment planning to discover other changes and improvements that we might want to recommend in the future, or how to discover those things, and also to track our success. We also continue to work on improving both the technical and cultural integrations that are designed to increase the amount of research we can capture and make available to the public. So with that, I'll pass it to Dan.

Thanks, Brandi.
And in Brandi's defense, I think I've changed the name of what we should be calling the metadata database about monthly, so I can see why it can be confusing at times. So I'm going to talk about the technical infrastructure that we have at Penn State to support the open access policy, why we did some of the things we did, and some of the challenges we have or have had with the system. We're working with a variety of vendor solutions and locally developed software. We have ScholarSphere, which is our locally developed institutional repository; the research metadata database, which is essentially a citation database of Penn State research and publications; and a host of other products I'll talk about in more detail in my talk. So initially we set out with a goal of saying we'd like to have 25% of Penn State's research openly accessible as a five-year plan. It seemed like a bit of a stretch goal based on some of the articles we had read and the research we had been doing. We thought if we put the right technology in place and had the proper outreach, we should be able to attain that 25% by 2025. So then we started asking: what does 25% of our research mean? How are we even measuring that number? Where does it come from? There are many sources of research output and scholarship, so how do we know when we've hit that number? Ultimately, we can only measure what we know. So if we're holding ourselves to a number like 25%, we need to measure it against some number that we're aware of, and we need to know what our research output is in terms of publications. At Penn State, we have several systems that have records of faculty activity. We have the Pure product from Elsevier. For those unfamiliar, it's essentially an application that sits on top of Scopus data; Scopus is another Elsevier product, in simple terms a database of a university's publications. So we had a certain number of publications that Elsevier provided.
We had Clarivate's Web of Science, which had a similar set of publication or citation data for researchers that we could access. We found the Web of Science data to be a bit cleaner and richer in terms of the metadata we were looking at; in particular, we were able to link grants to publications as well. And then there's Digital Measures from Watermark, which is a faculty activity reporting database. So where Pure and Web of Science show primarily publications, Digital Measures is a system that has more information related to faculty members than those two databases. It has things like student reviews of courses and biographical information. Additionally, it's a system where faculty control the data: they put the data into the system, and that data is then used to create things like an annual review or a university dossier. So from these systems, we said, all right, this might not be 100% of the publication output at Penn State, but currently it's 100% of what we know. We know there are things that we don't have as well, but we can't measure those. So as a first pass, this got us a sizable amount of Penn State citations. We knew there'd be a decent amount of work in querying these vendor-supplied systems. We know they have valuable data for us to measure our open access content. And we know we want to continue to monitor the data in these systems so we stay aware of how much research output there is going forward and how much we know is openly accessible. Many of these vendor-supplied systems, similar to the ones we have, may market themselves as the single data source you'll need to measure your research output. We viewed each of them more as a source rather than the source. Therefore, we wanted to bring all of the content together into a single database, and our next step was to create one aggregation of this information.
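To make that aggregation step concrete, here is a minimal sketch of what merging citation records from several source systems into one list might look like, deduplicating on a normalized DOI. The field names (`source`, `doi`, `title`) are illustrative only, not the actual schema of the Penn State database.

```python
# Sketch: aggregating citation records from multiple source systems
# into one list, deduplicating on a normalized DOI.

def normalize_doi(doi):
    """Lowercase a DOI and strip common URL prefixes so the same
    article reported by two systems matches."""
    if not doi:
        return None
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi or None

def aggregate(*source_record_lists):
    """Merge records from several systems, keeping the first record
    seen for each DOI and all records that lack a DOI."""
    seen = set()
    merged = []
    for records in source_record_lists:
        for rec in records:
            doi = normalize_doi(rec.get("doi"))
            if doi is None:
                merged.append(rec)  # no DOI: keep; title matching would be a later pass
            elif doi not in seen:
                seen.add(doi)
                merged.append(rec)
    return merged

pure = [{"source": "pure", "doi": "10.1000/xyz", "title": "A"}]
wos = [{"source": "wos", "doi": "https://doi.org/10.1000/XYZ", "title": "A"}]
print(len(aggregate(pure, wos)))  # prints 1: the two records collapse to one
```

In practice the hard part is exactly the normalization shown here, since each vendor reports DOIs (and author names) in slightly different forms.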
And I realize there's a bit of hypocrisy in saying no single database has all this information, so we'll create our own database to have all this information. But we purchased the data, and it'd be good to have it for our own purposes, in this case promoting open access. Additionally, once you get a script working to import all of this, it should work on future imports. So the legwork is largely up front, and this is a long-term play with open access, so we're willing to invest that time up front. We created a system, the metadata database, to aggregate all the data from these systems and then be able to manipulate it for our open access needs. It's worth noting we've imported content from other systems as well, so that this can help units on campus have a single source to query for faculty profile pages within their units. You can't really see it, but the second-to-last line on my slide has publications. And after we brought in all of those data sources, we have 240,687 citations from these three systems. We felt pretty good about the quality of this data in terms of being attributable to Penn State researchers, and it had rich metadata that in many cases included a DOI. Because of having the DOI, we were able to take our data set and query it against Open Access Button and then find out which of our publications were available in open access repositories. There was some discussion within our team on whether we should download these files and put them in our institutional repository to increase the amount of content in there. And at least for now, we actually decided against it, because it doesn't really expand the footprint of our open access content; it just expands the content in the IR. We did find 45,548 open access articles, or links to open access articles, from our publications. So that's about 19% of the 200-some-thousand articles that we had.
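The per-DOI lookup described above (which yields the 45,548-of-240,687, roughly 19%, coverage figure) could be sketched as below. The endpoint URL and response shape here are assumptions for illustration, not a tested client for Open Access Button (or Unpaywall, which works along similar lines), and a real import would batch and rate-limit requests.

```python
# Sketch: checking each DOI against an open-access discovery API and
# computing coverage. The fetcher is injectable so the logic can be
# exercised without network access.
import json
import urllib.request

OA_ENDPOINT = "https://api.openaccessbutton.org/find?id="  # assumed URL shape

def oa_link_from_response(body):
    """Pull an open-access URL out of a response, if present."""
    data = json.loads(body) if isinstance(body, str) else body
    return data.get("url")

def check_doi(doi, fetch=None):
    """Return an OA link for a DOI, or None."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8")
    return oa_link_from_response(fetch(OA_ENDPOINT + doi))

def oa_coverage(dois, fetch):
    """Fraction of DOIs with a known open-access copy."""
    found = sum(1 for d in dois if check_doi(d, fetch))
    return found / len(dois)

# Stub fetcher standing in for the live API:
fake = lambda url: '{"url": "https://example.org/oa.pdf"}' if "10.1/a" in url else "{}"
print(oa_coverage(["10.1/a", "10.1/b"], fake))  # prints 0.5
```

The returned link, when found, is what gets stored back into the metadata database alongside the citation.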
So with the goal of 25% at five years out, we're pretty happy that, starting at year zero in January once this policy went into place, we were at 19% right away. It also gives us a good measure to see how effective our policy is in increasing open access at Penn State: we'd already had almost 20% of our scholarship in open access repositories, we just didn't know about it. This number is a moving target. If we had another source of data that increased our coverage in another domain that may not have as much focus on open access, then we might see our numbers fall. For this reason, we're snapshotting our databases, which is just taking a picture of the data each year, so that we can analyze it and not only compare the numbers year to year, but, we hope, find better ways to analyze why those numbers are changing. I've thrown a lot of application names at you, so just quickly reviewing those names and the intentions of the applications I've mentioned: the metadata database is internally developed as a citation index of our research publications at Penn State, and it includes a link to open access content if that link exists. ScholarSphere is our institutional repository, also internally developed; it's where we hope to store open access publications that aren't already in other open access repositories, as well as open data, et cetera. And we have Digital Measures, which is a vendor-based solution for faculty activity reporting. Our intention, as I said, is increasing access to our scholarship. The first step was discovering that scholarship. The second step is finding out what is already openly accessible. After seeing what's accessible now, the next step is to try to increase that footprint of open access scholarship by gently nudging researchers to provide us a copy of their publication that they have the right to make openly accessible.
So we came up with three places where this could happen, where we could ask faculty, hey, can you provide us this content? The first one is adding content via Digital Measures, our university's central solution for faculty activity reporting for annual reviews and promotion and tenure. Faculty already have a defined workflow there where they report their activities for the previous year. We use Digital Measures to capture coursework, grants, advising, all types of faculty activities, including publications. For those not familiar, it's a vendor product from Watermark; it was just recently purchased by Watermark, so it's kind of going through a bit of a rebranding change. That said, we were able to work with the vendor and ask them to add the fields that you see on my screen to the form where faculty add their publications, to give faculty an opportunity to upload the corresponding postprint, right? Eventually we hope to automate this, but currently we have a very manual process in terms of getting that article from Digital Measures into ScholarSphere. We run a report, and essentially we get a list of the files that have been uploaded to Digital Measures with the corresponding metadata: the author, the publication information, a link to the file. Then we have someone on our team act as a proxy to upload that file into ScholarSphere and update our metadata database, so we know that we have an open access copy and can link to it. We put this in place in mid-December, and there were varying degrees of skepticism on how successful this would be. So far, we've had a lot of uploads to Digital Measures that are not proper files, so really what we're working on there is some user training to work through that at this point. The second way for faculty to provide open access scholarship is by logging directly into our metadata database.
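Stepping back to the Digital Measures report process just described: as a rough sketch, something like the following could turn an export of uploads into a staging queue for the proxy deposit. The CSV column names here are hypothetical, not the actual report format.

```python
# Sketch: reading an export of files uploaded to the activity-reporting
# system and turning each row into a staging task for a proxy deposit.
import csv
import io

def staged_deposits(report_csv):
    """Yield (author, title, file_url) tuples for rows that include
    an uploaded file, skipping rows without one."""
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row.get("file_url"):
            yield (row["author"], row["title"], row["file_url"])

report = (
    "author,title,file_url\n"
    "J. Smith,Soil Study,https://example.edu/f/1.pdf\n"
    "A. Lee,No File,\n"
)
print(list(staged_deposits(report)))  # only the row with a file survives
```

A human reviewer would still vet each staged item before upload, which matches the manual checking described in the talk.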
Once faculty log into the metadata database, they can view the list of their publications and see what we have identified for each one: this publication is known to have an open access version; the open access obligation has been waived for this publication; an open access version of this publication is currently being added to ScholarSphere; or the open access status of this publication is currently unknown, click the title to add information or submit a waiver, right? So from here, faculty are able to upload an open access copy of a publication that may not exist elsewhere, or they can provide a link to an open access copy if they have that. And the third way researchers may satisfy our open access policy is for them to add content directly to our IR, which is ScholarSphere, with the intention of meeting the open access requirement. This is perhaps the most obvious, but currently we don't have a way to automatically track it. So again, our manual process for this is to look at a report of IR uploads on a weekly or monthly basis, see which of those look to be postprints or content that appears to be meeting our open access policy, and then create a link between that content and the metadata database. Future development for the IR includes an OAI-PMH metadata feed, and that'll allow us to register our IR with Open Access Button. At that point, anytime we do a data refresh to find open access content, content in ScholarSphere will be automatically picked up. For now, it's a pretty manual process. At this point, we don't have a catalyst behind getting faculty to log into either ScholarSphere or the metadata database to upload content. Digital Measures is part of their annual review, so that's already in their workflow.
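As an aside on the OAI-PMH feed mentioned above: once it exists, any harvester (Open Access Button included) can page through the IR's records with standard requests. A minimal sketch follows; the endpoint URL is hypothetical, while the verb and parameters are standard OAI-PMH.

```python
# Sketch: building a standard OAI-PMH ListRecords request and reading
# Dublin Core titles from the XML response.
import urllib.parse
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build a standard OAI-PMH ListRecords request URL."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    return base_url + "?" + query

def titles_from_oai_xml(xml_text):
    """Extract Dublin Core titles from an OAI-PMH response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC + "title")]

# Hypothetical base URL for the planned ScholarSphere feed:
print(list_records_url("https://scholarsphere.example.edu/oai"))
```

Registering such a feed is what lets discovery services pick up IR content automatically on their next refresh, removing the weekly manual review step.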
Admittedly, Digital Measures is a bit begrudgingly part of their workflow in many cases. But a future direction for this is to develop an annual or semi-annual email that lets faculty know that we're aware of a publication they have that does not have an open access copy, and how they can log into our system and provide that copy. We hope to have that later this year. Our ability to go out and get citation listings and then find the open access content is something we're particularly proud of, and that work can continue to happen year after year. So we have a multifaceted approach of sorts, where faculty can meet the requirements of this policy in a number of ways. We also believe the more avenues we have for faculty, the better; the more we can reduce their burden, e.g., by automatically finding their existing content within the Digital Measures workflow, the better. It's worth mentioning at this point that we have a very minor integration with ORCID, but we're looking to have more meaningful integrations with ORCID down the road; Cynthia will talk a little bit about that. Our next phases are largely around integrating these three systems, between the metadata database and ScholarSphere, so this work can be done with a little less intervention on our behalf. And next, I'll turn it over to Anna to discuss the outreach.

Thanks, Dan. I'm going to talk now about the social component of the implementation. We could call it outreach; we could call it liaison work. In developing the technical implementation of the policy that Dan has just described for you, we were trying to simplify compliance for researchers, but we ended up making a fairly complicated system. One thing I do at Penn State is support compliance with federal funder public access policies. And as I'm sure many of you know, the NIH has four compliance mechanisms, A through D, in their public access policy.
And I used to chuckle a little bit about that when I would teach workshops on the policy. But when we wrote up our documentation for our system, we started with three compliance mechanisms, as Dan just described, and now in our documentation we actually have four. I'll say more about that in a minute. For some people, you know, one option is simpler; for others, a different option is simpler. And I think we have that complexity for some of the same reasons that the NIH does. In part, the researcher's decision about where to publish their article is going to determine which compliance option is going to be the best fit for them. That's true for the NIH options as well. We face a couple of additional challenges that push users into different compliance paths. One is that for that faculty activity reporting system that Dan was talking about, we have varying usage of the system. There are a couple of university units who are just starting to use the system, whereas others have been using it for a long time. So even among faculty, not all of them are going to want, or be able, to use that mechanism for making their work open. We also have a pretty broad open access policy. It applies to many different user types besides just faculty. It applies to staff, but it also potentially applies to people who don't have university email addresses, who can't authenticate into our systems. So this means that I have a lot more sympathy now for the NIH's four-compliance-option strategy. And it also means that our outreach goal is twofold. We want to make researchers aware of the policy and how to comply, but we also want to help them identify their easiest deposit option, ideally without showing them the full complexity of the whole deposit system, the whole technical implementation. Most users don't need to know the full picture; they need to know about the one path that's really easy for them to use. Next slide, please.
All right, so I'm going to tell you about three different sets of tools that we're using to do this outreach. One tool is the implementation website. Another is our open liaison program within the Penn State Libraries. And the third is a series of open access workshops and our open office hours. Next slide, please, Dan. So one challenge in our implementation has been aligning it with the policy text. Brandi mentioned this a little bit. What I like to say is that our implementation is fully compatible with the text of the policy, but it's not a perfect match. So for example, the policy says university researchers will provide an electronic copy of the author's accepted manuscript to the Office of Scholarly Communications and Copyright within the university libraries. And that sounds worryingly like you should email a PDF to Brandi and Anna, and that's the last thing we want them to do. So one simple thing we did was add hyperlinks from the policy on the official policy website, where a lot of people are actually reading it, and have those hyperlinks point to our implementation website. This is a fairly simple site. It primarily describes how to deposit, which the site refers to as how to make your work open, and it also talks about how to get a waiver of the policy. There's also a FAQ, contact page, et cetera. On the make-your-work-open page, researchers are confronted with those troubling four options that I already mentioned. Those are the four options listed here on this slide. The first option describes the process for uploading via the faculty activity reporting system. For faculty who are already using that system and whose articles are not already OA, that's the simplest option. And that probably represents a plurality, but I think not a majority, of the articles that we want to see uploaded. Or I should say, the articles covered by the policy.
However, if the article's already OA, so for example, if it's green OA in a disciplinary repository, someone uploaded their postprint to arXiv, or it's already in a co-author's IR at another institution, or if it's available OA from the publisher, the author is going to be better off going to the research metadata database, the second option on this list. First, because the metadata database is going to know if the work is already open access. That's why I corrected myself a minute ago: some of these things we don't actually want people to upload at all. We just want them to point us in the direction of their work, or ideally, we want to have already found it. So for people whose stuff might already be available, going to the research metadata database is a better option, because they won't have that extra work. However, non-faculty can't use option one. Some non-faculty can use option two, because they might have previous publications that have shown up in Pure, and therefore they have a profile in the research metadata database. But then we have some staff who can only use ScholarSphere, and we also have potentially others who are not staff but are covered by the policy. And they actually can't use ScholarSphere directly, because you need to have university credentials for that. But ScholarSphere uploading is still going to be their best option; they'll just have to get in touch with us about it. So in group presentations and individual reference inquiries, I try to tailor the way I talk about the deposit mechanisms to match the thing that's most likely to work well for the particular audience. I mentioned that we had to add a fourth compliance option to this list shortly after launching the website. In early January, we launched with just one, two, three. But then we added using another OA repository or an OA publication method of your choice.
So as Dan explained, if you use an OA repository or an OA publication method, the research metadata database is actually going to find those things. So if we're being precise, this is a variation on option two. But when we started doing outreach on this, it became clear that presenting it that way was really confusing to some researchers. So we made option four specifically to attract people who are thinking, I've already done this OA thing, what do I need to do now to comply? And what we're saying there is essentially, hang tight. You've already made it open access; you don't need to do anything else. Wait for us to find it. And also, crucially, let us know if we don't find it, because we're continuously trying to improve the system, and if we find that we're not finding things that are being published on an open access basis, we certainly want to know about that, so we can expand the sources that the research metadata database is checking. So having a page for that option gives us a place to go into more detail about the research metadata database: talk about how it works, what sources it uses, how quickly we should expect it to find an article that's recently been published. And that information doesn't belong on the page for option two, because option two is going to attract a lot of people, most of whom are not going to care that we're using Unpaywall to check for the open access version. And they're not going to want to know how to check that the disciplinary repository where they posted their postprint is among Unpaywall's sources. But for somebody who's in the habit of using a particular disciplinary repository, that's really useful information. And the same goes for someone who always publishes gold OA. They may want to check on the journals where they frequently publish, or the journals that they edit, to see if we are finding those when we look for OA versions. So essentially, this fourth option is for power users.
It's for the people who already know what they're doing, and we can go into a little more detail on that page. In addition to all those ways of depositing in compliance with the policy, we also have the get-a-waiver option on the same website; it's pretty prominent alongside make your work open. When a user clicks on the get-a-waiver link, they're forced to authenticate so that we can either send them into the research metadata database, where there's a waiver form, or, if they don't have a profile there, send them to a backup form. And this is another example of how the implementation is differentiating between users in order to deliver the simplest possible experience for each of them. Next slide, please. All right. So I also need to say a little bit about our open liaison program. Many liaison librarians participate in this program. It provides training and support for working on open access and other open issues: open education, open data, et cetera. This program started a couple of years ago, while the open access policy was still being drafted. Having it already in place when the policy came up before the faculty senate, was approved, and went into effect was really helpful, because those folks already had that training. Participation in the program is voluntary, and in many cases there's just one open liaison acting as a representative for a larger group of liaison librarians working together in a disciplinary area, such as the sciences, or at a particular location, like one of the campuses of Penn State. So not everyone who has liaison duties is part of this program. That has pluses and minuses. It works well in the sense that not everyone has that additional training requirement, essentially, but it's also challenging because not everyone is getting that additional training. Can you go to the next slide, Dan? So alongside the open liaison program, our open access workshops are another helpful tool in the outreach toolbox.
So I provide template materials for two types of workshops: one on the Penn State open access policy, and one on the open access movement in general. They also both come in short and long versions, short being five to ten minutes and long being an hour or an hour and a half. I teach workshops like this that are open to the public, the whole university community. But I also do customized workshops, alone or with others, for university groups. And I support liaison librarians in their outreach efforts so that they can adapt these materials to meet local needs. Since this January, I've given four of those public ones, with a total of about 100 attendees. I've also co-presented or presented five customized workshops, including two for councils of administrators across many university colleges. That was a nice opportunity. At least half a dozen of our colleagues have also presented about the policy, including to two groups involved in faculty governance. So we've really been getting the word out. It's been especially helpful to have a very short spiel available. In April, I gave a couple of presentations where I actually managed to share a five-minute time slot with two other colleagues. It's helpful in that time-crunched situation to have the longer public workshops scheduled and available, so that people can say, oh, that was actually interesting, I need to know more, I'm going to go to this workshop. Or to be able to refer them to the website. The other thing that we had been doing, that we stopped doing, was offering drop-in office hours about the policy, where Brandi and I would staff a Zoom room for an hour and publicize our availability to answer questions. In total, from January to the end of April, one person dropped in over all the different times we did this. And she was actually a library colleague who had already been very proactive in learning about the policy. So we stopped; we decided that it wasn't a good use of time to keep offering them.
I think it was nice to have them to begin with, even though we didn't get many, or really we just got one taker on attending these office hours, because being able to say that they were available was really helpful. We also get a couple of reference inquiries a week about the policy. And we answer those via email and do consultations as well. We set up consultations in part to be the replacement for those set office hours. It's much easier to manage scheduled consultation appointments. So now over to Cynthia to talk about future plans. Great. Thank you. So Dan has already mentioned a number of the future plans. But we also felt it was important to pull this information out and highlight how we really are moving forward. It was important to us to meet and exceed our needs for a user-centered solution that really didn't add additional burden to faculty over the next few years. And to do this, we will be continually assessing and improving our solution. Thus, our process and workflow is not meant to be done once and over with, as if this is all we're doing. Rather, it is iterative, and we really seek to improve upon it and refine it over time. So our next round of development will focus on a variety of distinct feature enhancements, some of which are already in development. Some of this hadn't started as of March, but has come up since that time. So the first enhancement we will focus on is pushing metadata about known open access works to individual ORCID profiles. Given integrations between ORCID and a number of existing profiling systems and platforms, this auto-population of researchers' ORCID profiles will ease future publishing in journals that require an ORCID, as well as the building of funder-required biosketches. So we really see this as not just benefiting us at Penn State and pushing forward OA, but also contributing to the research ecosystem that a lot of our faculty work within.
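As a rough illustration of the ORCID enhancement described above, here is a minimal Python sketch of building a work payload in the shape used by ORCID's public v3.0 member API (POST to /v3.0/{orcid-id}/work). This is not Penn State's actual integration; the helper name and example values are hypothetical, and writing to a real profile additionally requires member API credentials and an access token the researcher has granted.

```python
def build_orcid_work(title, journal, doi):
    """Build a minimal journal-article work payload following the
    JSON shape of ORCID's v3.0 work schema. Actually pushing it to a
    profile requires an OAuth token from the researcher (not shown)."""
    return {
        "title": {"title": {"value": title}},
        "journal-title": {"value": journal},
        "type": "journal-article",
        "external-ids": {
            "external-id": [{
                "external-id-type": "doi",
                "external-id-value": doi,
                "external-id-relationship": "self",
            }]
        },
    }

# Hypothetical example values, for illustration only.
work = build_orcid_work(
    "Example Open Access Article", "Journal of Examples", "10.1234/example")
print(work["title"]["title"]["value"])
```

In a real integration, the returned dictionary would be serialized and POSTed with a Content-Type of application/vnd.orcid+json; the point here is only that the repository already holds all the metadata needed to populate such a record.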
So we will also be developing an email nudge program for the implementation workflow. This program will send an automated email to every faculty member who has published a known article in the last X number of months, but has not provided a link to an OA version or a PDF of the article. So this will essentially send an automated email to somebody whom we have identified through our metadata aggregation from Pure and the other technical platforms that Dan spoke about, and then ping them to request an OA version. Finally, we hope to automate as much of this workflow that is manual as possible. As Dan mentioned, currently, for anything that's not uploaded through Digital Measures, we have to manually download it, review it for appropriateness and accuracy, really try to take an educated guess as to whether or not it is a postprint, and then upload it into our institutional repository, and then also include that metadata over in the researcher metadata database. So the workflow and the workload for this process have been manageable so far. And as an aside, we have noticed a number of non-postprints being uploaded to the platform and have added a step to our process to reach out to the author to obtain the appropriate article version. After we release a new version of our institutional repository, hopefully in August of 2020, we will have APIs, so we will be able to better connect to Digital Measures and the researcher metadata database. There will always, obviously, probably be some sort of human component of checking or verification. But we are also exploring some other solutions that can do things in an automated fashion, such as scanning a PDF for any journal names or something like that that might indicate that it's actually a publisher-provided version. Next slide, Dan. So as you've heard from my colleagues, much of this infrastructure was just launched in January.
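The nudge workflow just described might be sketched roughly like this, assuming article metadata has already been aggregated from Pure, Web of Science, and the other sources mentioned. The field names, the six-month default window, and the helper name are all assumptions for illustration, not Penn State's actual code.

```python
from datetime import date, timedelta

def select_nudge_recipients(articles, months_back=6, today=None):
    """Pick authors of recent known articles that have neither an OA
    link nor an uploaded PDF, so each can be emailed a request for an
    OA version. `articles` is aggregated metadata; the record fields
    used here are illustrative assumptions."""
    today = today or date.today()
    cutoff = today - timedelta(days=30 * months_back)
    recipients = {}
    for art in articles:
        recent = art["published"] >= cutoff
        missing_oa = not art.get("oa_url") and not art.get("pdf_uploaded")
        if recent and missing_oa:
            # Group titles per author so each person gets one email.
            recipients.setdefault(art["author_email"], []).append(art["title"])
    return recipients

# Example: only the recent article with no OA version triggers a nudge.
articles = [
    {"title": "A", "published": date(2020, 3, 1),
     "author_email": "abc123@psu.edu", "oa_url": None, "pdf_uploaded": False},
    {"title": "B", "published": date(2020, 3, 1),
     "author_email": "xyz789@psu.edu",
     "oa_url": "https://example.org/oa/b", "pdf_uploaded": False},
]
print(select_nudge_recipients(articles, today=date(2020, 5, 1)))
```

The actual program would hand the resulting author-to-titles mapping to a mail step; the selection logic is the part the aggregated metadata makes possible.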
But we are currently establishing a multi-pronged approach to assess the success of the OA policy, the technical implementation, and the impact of Penn State's OA policy in the larger scholarly communication ecosystem. As has been previously noted, we found that on average every year around 20% of Penn State research is available OA. In August of 2019, we conducted an analysis of OA articles for PSU authors for 2018 publications. And from this, we actually know that more than 2,800 PSU publications from 2018 are open access. This is more than a quarter, which is really exciting, of 2018 publications with a PSU author. I should mention that this is a pretty fuzzy number, though, as the calculation does not have a 100% accurate denominator. And this number for 2018 is extra fuzzy, because obviously it includes a number of articles that were just made available OA through expiring embargoes. So we will continue to calculate the OA articles on a yearly basis to really track this trend and the impact. But it was really important for us to have a baseline going forward. Around the yearly mark, so next January or so, we plan to reach out to faculty at various points along their promotion and tenure timeline, and across departments and colleges, to conduct user testing on the workflow, wording, and design of the various components of the technical implementation. As Anna noted, we've already been iterative in the different methods we've put in place for compliance, but we're always looking for ways to improve what we've established. As all of you know, and this is probably very much talking to people who already agree, open access is about providing equitable access to information that is otherwise locked behind paywalls and inaccessible to those who can't pay the fees. And to better track this goal of equity of access, we will also be exploring and building upon previous research to understand the broader impact of the Penn State open access policy.
To do this, we plan to conduct a citation analysis of a sample of PSU author works, both pre- and post-OA policy, to understand how and if this broader access results in more use and citations of the research. I think it quickly became apparent in the current public health climate that open access to research, data, and other research outputs is key to advancing scientific discovery and solving real-world challenges. Through our new policy and robust implementation program, Penn State is confident that going forward it will be a key contributor to this open ecosystem. And we've been very pleased to share our implementation program with you. So with that, Dan, next slide. We thank you very much for coming and hearing about how we're handling this, and we're curious to hear your questions. Thank you. Thank you. Thanks to all of our panelists for a really interesting presentation. I was just looking at the policy, or I guess it's really the recommendations, that was posted exactly a year ago, and it's amazing what you all have accomplished in that amount of time, frankly. So thank you for coming and talking to us about this. And I wanna remind all of our attendees that there's a Q&A button at the bottom of your screen where you can type in your questions, or type them into the chat, and our panelists will be happy to address them now. And also, just to let you know, you can raise your hand in this environment if you'd like to ask a question live or make a comment live. Feel free to do that, and I can unmute you. And so let me just move in right now. We have a question already from Kristin Weishadel, who asks, how are you able to get institutional buy-in for this policy? Hi, it's Brandi. I think I'm gonna take this one. So the provost really led the charge on this and charged the task force with making Penn State a leader in the open access movement. So I think everything really flows from our central university administration wanting this to happen.
And as I said in the beginning, we had a very long approval process, but ultimately the report went for discussion at the Faculty Senate, and that was really important. I think the other thing that really led to more buy-in is that we had solutions, and that we were thinking about solutions and making it easy for the faculty to comply with the requirements of a future policy, which is now our policy. I think that's really the most important thing, and it's quite different than other institutions' experiences. You know, 10 years ago, policies were made and then everyone figured it out, and we had the benefit of figuring it out as we got the policy approved. So it's a little different now than it was passing a policy 10 years ago. Yeah, that's a really good point. Thanks, Kristin, for bringing that up so that we could kind of clarify that, and thank you, Brandi. And along those lines, I guess I was thinking, I think you mentioned, Brandi, during your presentation that although you followed Harvard's model or something similar, you may be considering some changes to the policy. Do you have some changes in mind already? Are there things that you're considering, or any kind of evaluative exercises yet about what's working and what isn't? Well, Anna may wanna jump in on this as well. While we see the implementation that we have, those kind of four pathways, as compatible, we think that the policy could be more clear on basically treating any open access version upon publication as okay. That is, you don't have to supply it to us directly; we just want to, as Dan said, expand the footprint of what is available, right? And that really is our goal. Our goal is not necessarily to have it for ourselves, but to have it for the world. So we think that it could be improved with some clarity, but also at this point it's not clear if it's worth that kind of engagement with updating it.
But we think that we'll know more in a few years about where we want to recommend changes, although most of our changes I think are gonna be on the implementation side. Yeah, I'll just chime in on that. So I think there are a lot of places where the policy is in a sense more strict, or, I mean, really just not precisely matched with what we want, right? Like one of the things is it says you have to provide the author's accepted manuscript. Well, if I publish in a gold OA journal and I wanna provide the final version, technically the policy does not say that's okay. But of course we're happy for that to be the version that you provide, if you have the right to provide it. So there's some stuff like that. The other kind of change I think that we might want to make down the road, and it's too early to say whether this is a good idea, is basically with the waiver, and I see that there's a question about it, so I'll try not to answer that question quite yet. But our waiver is a per-article waiver, like the Harvard policies' waiver, and we very much bought into kind of the logic behind having a waiver that's part of the Harvard model policy and, you know, Stuart Shieber's and Peter Suber's comments on the model policy. And it seems like perhaps that is not actually how things are working at a lot of institutions today. And the landscape of how publishers treat attempts to negotiate on the part of an author, or how publishers treat an addendum, is really different from 2008, when Harvard was passing its OA policies. So I think that's another area where we might need to think about changes. Okay, and well, while you mentioned waivers, since we do have that question, can you go ahead and clarify? This is from Laurie Smith, who's asking for someone to please explain the waiver and the need for it. Yeah, sure. So, well, this is a waiver on a per-article, per-publication basis.
So when we say, you know, fill out the waiver form, it means that an author is gonna go in there and put in, at a minimum, the article title and the name of the journal. And then we hope we will be able to match that with the citation when we find out that it's been published, you know, when it comes in from Scopus, or when they put it in Activity Insight, or it comes in from Web of Science. We hope that we'll be able to tell that the thing where you used the original title that you later changed, and you misspelled the name of the journal, is the same as the thing that you actually published, you know, that we'll be able to match waivers with metadata. And then we're not gonna send them an email, essentially. But on the legal side of things, the waiver is a waiver of the non-exclusive license that the policy causes an author to grant to the university. And this is an area where there's a fair amount of heterogeneity among institutions, and an area where following Harvard's example would have gotten us into trouble, at least because Harvard considers scholarly articles written by faculty to not be works made for hire. And now I'm going down like a copyright rabbit hole, but they don't consider those to be works made for hire at all. They subscribe to the teacher exception, which is this old common law thing. Happy to talk more to anyone about this, but I'm sure you don't wanna hear it. But Penn State and a ton of other institutions think that those things are works made for hire. And so the structure of who starts out as the copyright holder and who's licensing what to whom is different at Penn State than it is at Harvard. And so that part does work a little bit differently for us. Okay, all right, that was helpful. Thank you very much, Anna. Okay, and now we have a question from Hannah Frost. What are the core critical roles or positions needed in place to get OA off the ground at a university? I really think that's gonna differ from place to place.
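The kind of fuzzy matching described here, tolerating a changed title or a misspelled journal name, could be sketched with simple string similarity. This is only an illustration of the idea: the field names and the 0.8 threshold are assumptions, not Penn State's actual matching logic.

```python
import re
from difflib import SequenceMatcher

def normalize(s):
    """Lowercase, drop punctuation, and collapse whitespace so that
    formatting differences and small typos matter less."""
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", s.lower()).split())

def waiver_matches_citation(waiver, citation, threshold=0.8):
    """Fuzzy-compare the title and journal an author typed on a
    waiver form against a citation harvested later from Scopus or
    Web of Science. Threshold and field names are assumptions."""
    title_sim = SequenceMatcher(
        None, normalize(waiver["title"]), normalize(citation["title"])).ratio()
    journal_sim = SequenceMatcher(
        None, normalize(waiver["journal"]), normalize(citation["journal"])).ratio()
    return title_sim >= threshold and journal_sim >= threshold

# Hypothetical example: the author's misspellings still match.
waiver = {"title": "Open Acess Polices at Penn State",
          "journal": "Jornal of Scholarly Communication"}
citation = {"title": "Open Access Policies at Penn State",
            "journal": "Journal of Scholarly Communication"}
print(waiver_matches_citation(waiver, citation))
```

A matched waiver would then suppress the nudge email for that article; unmatched waivers would presumably fall back to human review.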
So I wanna make a point before I answer that question, which is that lots of places have open access policies without a technical implementation, and the policies are not very successful. And it's possible to also do some of this work without an open access policy, right? Though you'd have to get a license, right? But you can ask researchers, if you have a faculty management or reporting system, to deposit their postprint for deposit in the institutional repository when they report about it. You don't actually necessarily need an open access policy to do that. It helps a lot, right? But you can still ask them and then go through a license-granting process as part of that. But we really think that the technology has to be there. So there has to be either a team, or someone that manages the outsourcing, to create the technology. And there has to be someone to educate researchers about what they need to do in order to comply. And I think those are the two critical roles. And Anna mentioned that she also educates a lot on public access, and her role grew out of discussions that we had before this open access task force in the libraries, that we needed this role to be part of our plan on helping researchers comply with funding mandates. Right. And actually, I was curious about those workshops, Anna. Who tends to come to your public workshops? Is it faculty looking for more information on this policy, or who does it attract? Well, it's interesting to think about that question in combination with the workshops I do on the public access policies, because I've been doing those workshops for a few terms now. And I was able to publicize those by having the sponsored programs office send an email to everyone who has ever applied for or ever had an NSF grant at Penn State or an NIH grant at Penn State. And so all the PIs, past and present and potentially future. So that's a really good list to have someone email your workshop to.
And when the open access policy came into effect this January and I was putting together my spring slate of workshops, I got them to plug the open access workshops as well. So from that listserv, we do get faculty coming to the workshops, which is fantastic. And it's hard to get them to show up for my other kinds of workshops, like on copyright and stuff. But because it's research compliance, and because I keep them as short as possible, I do get faculty. And then also I get kind of research administration support staff, who also play a pretty big role in the public access policy compliance. And in some cases, it seems like they're going to play a role in the OA policy compliance. That's kind of an ironic thing about both the federal public access policies and the Penn State one: they're designed for the author, the scientist, to do the actual work of uploading the article. And yeah, sometimes you can have a proxy do it for you, but we entertain this fiction that it's the scientist, and often it's not. And how is someone supposed to get a copy of the postprint if they were never involved in the drafting of the article? So there are some interesting complexities there that I uncover in those workshops. Yeah, that's really interesting. So you're working with the Office of Research. That's a great strategy, and what a marvelous way to find your audience. And speaking of working collaboratively across the campus, I was thinking about IT. I mean, I think Dan mentioned that there's some integration built in, potential integration with your system. So I was wondering, to what extent are you working with central IT, or was central IT involved in putting together the system and having all these components fit together? Yeah, actually, our central IT unit wasn't really involved, any more than that some of their systems just provide access to their data. And so it was more of just, like, a very informal relationship from that perspective.
And in some cases, we mentioned the ORCID integration. Actually, our central IT unit wrote an application so that you can link your Penn State access ID with an ORCID iD. So, I mean, they definitely have a role, but, like, in terms of day-to-day work, our partnership in this specifically wasn't as big, because a lot of these applications are either written out of the library, or, like Pure, for example, are out of the Office of the Vice President for Research. So a lot of coordination with a lot of units on campus. Yeah, definitely. Well, really interesting to hear about your process, and you clearly have a very robust mechanism in place, a great model for anyone who's interested in exploring something similar. As someone put it in the chat box here, yeah, it really was a marvelous presentation. So thank you all, all of our panelists. Thanks to all of our attendees for being here, and we hope we'll see you at another CNI webinar soon. Thanks so much.