Hi, everyone. We'll get started in just one moment. If you could please take your seats. Thank you. Welcome to our final session of day one. I'm so happy to have you all here with us. For those of you who have not met me, my name is Paige, and I am CNI's communications coordinator. And I have the honor of introducing our new experimental and, I think, very fun and informative lightning round session. This will feature seven-minute back-to-back presentations. I just wanted to review a few housekeeping rules before we get started. First of all, please hold your applause until the end of the entire session, as opposed to between each talk. Secondly, we won't have time for Q&A, but I've heard them getting ready next door: we'll have the reception right after this, so we invite you to head over for food, drink, and conversation, and to find your speakers there and ask any questions. Also, tomorrow, much like today, we'll have ample break time, so feel free to find speakers then and ask questions as well. Lastly, like all of our sessions today, this will be recorded and posted on our YouTube channel and website in the weeks following the meeting. We will make that announcement via social media and email to let you know when it's ready. So once again, thank you for joining us. Thank you to our speakers, and we will go ahead and get started.

Good afternoon. It's really great to be here with you, and I'm going to talk fast. I'm going to base my talk on observations from the book Designing Libraries for the 21st Century, published in fall 2022 by ACRL and co-edited by myself, Tom Hickerson, and Leonora Crema. I'll focus on three themes, using some quotes from contributed chapters along with my own observations. Brian Mathews wrote that he had visited a newly renovated library and felt underwhelmed. Quote, the reason I had that response was due to my expectations. It was described as a transformation, yet it seemed more like a refurbishment. The same service design and structures were in place. Yes, it looked stylish, but conceptually speaking, it operated in the same manner as it did decades prior. My disappointment had nothing to do with aesthetics. Rather, my reaction was to the unseized potential. This library had a chance to transform itself, to actualize a new guiding philosophy, and it missed that opportunity. Close quote. Brian focuses his analysis on staffing issues, which are very important. I believe that one of the reasons libraries fail to achieve their potential through renovations or new builds is that they do not look closely at the research, teaching, and learning trends in their institution and develop specific programming, along with newly designed space, to meet those institutional needs. Libraries don't just serve students generically studying. They serve undergraduates, graduate students, and faculty who engage with a wide range of information resources and produce all kinds of content in their disciplines. They did recognize this at the University of Arizona, where they developed CATalyst Studios. By the way, Arizona will host the next Designing Libraries conference in fall 2023. As Shan Sutton writes in his chapter, the studios include a maker studio, a VR studio, a data studio, a learning studio, and a common area. Quote, CATalyst Studios is founded on a commitment to formal and informal learning as essential parts of the UA student experience.
In addition to working with faculty across disciplines to integrate the use of the studios into the curriculum, informal workshops on topics such as GIS, software carpentry, and design are offered, as well as extensive drop-in hours for impromptu learning. CATalyst Studios' programming also reflects the recognition that computational forms of research are increasing across all disciplines at the U of A. Another important aspect of the student experience is taken into account in the studios' programming with its emphasis on diversity, equity, and inclusion.

Mimi Calter, who's now Vice Provost and University Librarian at Washington University, was formerly heavily involved in renovation projects in her role as Deputy University Librarian at Stanford, where the David Rumsey Map Center is located. She writes that the program statement for the center stated that it will, quote, be a unique collections-based research center in the Stanford University Library for the use of cartographic information in all forms, from paper to digital, that enables and promotes interdisciplinary scholarship. She continues: here, physical and virtual spaces will be co-located to facilitate and leverage geospatial research in ways never before conceived by combining rich and unique collections of physical maps and other cartographic artifacts with their digital derivatives in a one-of-a-kind, technology-rich environment. The Rumsey Map Center is being designed from scratch as an incubator and accelerator for collaborative and interdisciplinary work that embraces the arts, humanities, sciences, and professional disciplines. It will be a unique resource for the Stanford community and the region, and serve as a new model for 21st-century collections-based digital research and teaching, close quote. This project is an exemplar for imagining how physical spaces can promote synergies between analog and digital special collections. I hope many special collections projects will similarly think through such possibilities.

In her chapter on staffing and organizations, Mary Ann Mavrinac, Dean Emerita at the University of Rochester, writes, quote, at heart, our spaces are only spaces without the programs, services, expertise, scholarly content, and technology that enable and infuse these spaces to drive the transformational experiences that we aspire to create for our students, faculty, and staff. To activate our spaces, our staff must be drivers, unleashing their talent, creativity, and expertise to continually make improvements in anticipation of, or in response to, user needs and changes in the environment. Enabling and empowering staff to accomplish this requires progressive leadership that provides foundational elements and structures to support staff, increasing organizational capacities for change and thereby transforming spaces. She continues: transforming organizations and transforming spaces are iterative, dynamic, and co-dependent processes, close quote. This really sums up one of the major themes of the Designing Libraries book.

We don't yet know what the impact of the pandemic will be on library spaces. My perception at this point is that it will vary by type of institution and type of programming offered by the library. I believe that in many institutions, library spaces, if configured with institutional goals and user needs in mind, will continue to play a central role in their communities' academic lives. I hope that you'll explore the 28 chapters of the Designing Libraries book by a group of really stellar authors.
It was important to us as co-editors to ensure that an open access version of the book is available. I hope that you'll find it inspiring for the next renovation in your library. Thank you.

All right, hello, I'm Jessica Tieman. I'm the Digital Preservation Librarian at the Government Publishing Office. I came to GPO seven years ago, and since that time, I've been supporting the agency's strategic mission in maintaining an ISO 16363 certified digital repository. A little background on GPO. GPO was founded in 1861, and our current mission is to publish trusted information for the federal government to the American people. GPO is a producer and distributor of official publications and information products from all three branches of the U.S. government, and we aim to support an informed nation that has convenient and reliable access to its government's information through our products and services. One of those products and services is govinfo, an ISO 16363 certified trustworthy digital repository. govinfo provides online access to current and historical information from all three branches of the government, and it consists of a content management system, a preservation repository, and an online public website. The history of govinfo stems from 1993 legislation known as the GPO Access Act. At the time, that act mandated that GPO provide an electronic means of accessing government information. That later evolved into a mandate that GPO provide an electronic storage facility, and it has since evolved into GPO's commitment to maintaining an ISO certified repository. Currently, GPO has about three million archival information packages that contain over 8.4 million individual PDFs and over 18.9 million images, along with audio files, spreadsheets, and several other file formats. The total capacity of all of our archived information is around 80 terabytes. Some of our most accessed content includes the Federal Register, the Code of Federal Regulations, U.S. courts opinions, and congressional bills. Since 2018, GPO has maintained our ISO 16363 certification. We were only the second institution in the world to attain this certification, and we're currently the only institution to be ISO certified. We've benefited greatly as a result of attaining that certification: having third-party official recognition of our digital repository as trustworthy has bolstered trust in GPO's capability to leverage current technology, effectively mitigate long-term risks, and operate a large-scale program that meets the needs and expectations of our designated community. For our stakeholders, ISO 16363 certification is the only form of formal repository certification that is considered to be fully transparent and to remove auditor bias. And as a federal institution, it is particularly essential that any audit process is of the highest established credibility in order to maintain the integrity of that certification. Despite the benefits that GPO has specifically received as a federal institution from maintaining our certification, no other institutions have publicly announced that they're going to pursue ISO 16363. Within the digital preservation community, particularly when interfacing with members of the academic library community, GPO has observed that other professionals are hesitant to seek administrative buy-in to pursue ISO 16363 without more transparency about the costs, both financial and in terms of human commitment in time.
GPO has perceived that there's often a perception, or an expectation, that preparing for an audit could consume the time of a full-time staff member. In addition, it is accurate to state that not all repositories deem it necessary to seek such an extensive audit process in order to be trusted by their designated community. It appears that good-enough practices are still accepted when industry-standard-level actions are not feasible due to certain institutional constraints. It might also be challenging for institutions to define their designated community, and this is essential, as a repository's efficacy is defined by its ability to meet that designated community's needs in order to provide evidence of its trustworthiness. In 2022, GPO identified CoreTrustSeal as a secondary form of assessment worth pursuing to supplement our existing ISO certification. CoreTrustSeal is actually considered to be a core level of assessment, whereas the ISO certification is considered more of a formal level of assessment, so it might seem redundant or duplicative to do both, but GPO sees multiple benefits to achieving this dual model of assessment. For one, this ensures that GPO maintains at least one form of certification at any given time, in the event that ISO 16363 accredited bodies are unavailable or other unforeseeable factors pose availability concerns for the performance of ISO 16363 audits. Secondly, this ensures GPO's involvement in a professional community of over 100 international repositories that have already committed to this core level of assessment but have not yet committed to the formal level of assessment, and that does include 10 other federally operated digital repositories. This also provides potential opportunities for GPO to serve on the CoreTrustSeal peer review board, and we might be able to publish or present on the experience of attaining both forms of certification. That might allow GPO to directly interact with other digital repositories and encourage the broader professional community to pursue the more formal level of certification, using GPO as a model of success and feasibility. So as of right now, GPO submitted our application to CoreTrustSeal in July 2022. We have recently undergone a surveillance audit to maintain our ISO certification, and we anticipate holding that certification well into 2023 and beyond. And we plan to continue engaging with the community on trustworthy digital repository assessment while maintaining both forms of certification.

Hi, I'm Martin, and I'd like to talk about the CoSAI project. CoSAI stands for Collaborative Software Archiving for Institutions, and with generous funding from the Sloan Foundation, this is a joint effort between New York University, Los Alamos National Laboratory, where I am, the University of Pittsburgh, and Old Dominion University. So I think we can all agree that Git and Git hosting platforms, such as GitHub, GitLab, Bitbucket, you name it, have become increasingly popular, right? We're all using these platforms to share code, to collaborate on code, to version our code, and to share issues related to this code. So with that increase in popularity, we're asking ourselves, well, is this reflected in our scholarly articles? And as a matter of fact, it is.
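To make that kind of measurement concrete, here is a minimal sketch of how one might count links to Git hosting platforms across a corpus, assuming a directory of plain-text article files; the "corpus/" path and the platform list are illustrative, not the project's actual analysis code.

```python
# Minimal sketch: count links to Git hosting platforms in a corpus of
# plain-text article files. "corpus/" is a hypothetical directory.
import re
from collections import Counter
from pathlib import Path

PLATFORMS = ("github.com", "gitlab.com", "bitbucket.org")
LINK_RE = re.compile(
    r"https?://(?:www\.)?(" + "|".join(re.escape(p) for p in PLATFORMS) + r")/\S+",
    re.IGNORECASE,
)

def count_links(corpus_dir: str) -> Counter:
    """Tally links to each Git hosting platform across all .txt files."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        for match in LINK_RE.finditer(path.read_text(errors="ignore")):
            counts[match.group(1).lower()] += 1
    return counts

print(count_links("corpus/"))  # e.g. Counter({'github.com': 412, ...})
```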
So as shown in the center of this slide here, we looked at two different corpora, the physics preprint server arXiv.org and the PubMed Central corpus, and we do indeed see an increasing frequency of links to Git hosting platforms over time. Specifically, as indicated by the orange line there, this scenario is really dominated by GitHub, right? Everybody links to GitHub. In fact, if you look at a fairly recent preprint from arXiv.org, you'll have a 20 to 25% chance of encountering a link to GitHub. Interestingly enough, from my point of view anyway, if you look at the PMC corpus, roughly 15 years or so ago SourceForge was the lingua franca where you put your code and which you linked to, and then around 2016 GitHub took over and has been dominating ever since. So given this increase in popularity, and given this somewhat natural introduction into our scholarly literature, what does this mean for Git hosting repositories when it comes to preservation of the digital record? How do we ensure access to these sorts of things? What does it mean when we're talking about reproducibility of our scholarly record, right? Or maybe in other terms, what happens if Microsoft tomorrow decides to sunset GitHub? Or, even worse, Elon Musk takes over, right? What does this mean for our scholarship? This might be a gloomy sort of scenario, but it's not unprecedented, right? We do have Git hosting platforms that have basically disappeared: Google Code, Gitorious, these sorts of things. So maybe there's an argument to be made that these platforms are maybe just websites, right? Or that these platforms maybe have to be profitable to be around for a while. Now, I absolutely agree that there are organizations and initiatives around that address the notion of software preservation and software archiving. However, I also argue that there is still a lot of work to be done, right? Take, for example, the issue of scope. General web archiving initiatives are usually a best-effort sort of approach, which means there's no guarantee that, for example, all GitHub repositories of all NYU scholars are archived, or archived at the desired time or at the desired frequency, right? So there are issues there. On top of that, some of these initiatives give you access to an archived record of the actual code, but don't give you access to an archived version of the issues around the code, right? And especially when we're talking about reproducibility, I would argue that this is a really key element to have in your archival record. And then of course we have other issues that are more, let's say, typical of web archiving, right? We have incomplete archival records, as shown in the center of the slide, and we have something that we refer to as temporal inconsistencies or violations, as shown on the right-hand side of the slide, where the web archive tells me that this GitHub repository was archived on November 3rd this year. However, individual bits and pieces of this repository have been archived at different times. Specifically, the code living in the repository, downloadable as a zip file, was archived in March of this year, individual files even earlier than that, and the issues page was archived in August of this year, right? So if this repository was gone from the live web and we had to reconstruct it from web archives, we might find ourselves in a situation of reconstructing something that, in this composition, never existed on the live web, right? Surely an issue, I hope you agree.
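As an illustration of how such temporal violations might be detected, here is a minimal sketch that compares the most recent capture dates of a repository's component URLs using the Internet Archive's CDX API; the repository URL and the 30-day threshold are hypothetical, and this is not the project's actual tooling.

```python
# Minimal sketch: flag temporal inconsistencies in an archived GitHub
# repository by comparing latest capture dates via the Internet
# Archive's CDX API. The repository URL below is hypothetical.
from datetime import datetime
import requests

CDX = "https://web.archive.org/cdx/search/cdx"

def latest_capture(url: str) -> datetime | None:
    """Return the datetime of the most recent capture of url, if any."""
    params = {"url": url, "output": "json", "limit": "-1"}  # -1: last capture
    rows = requests.get(CDX, params=params, timeout=30).json()
    if len(rows) < 2:  # first row is the CDX header row
        return None
    return datetime.strptime(rows[1][1], "%Y%m%d%H%M%S")

repo = "https://github.com/example/repo"  # hypothetical repository
components = [repo, repo + "/issues", repo + "/archive/refs/heads/main.zip"]
captures = [d for d in (latest_capture(u) for u in components) if d]

# If component captures are spread too far apart, a reconstruction may
# combine states that never coexisted on the live web.
if captures and (max(captures) - min(captures)).days > 30:
    print("Warning: captures span more than 30 days; reconstruction risk.")
```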
All right, so how is CoSAI different? What do we try to do with CoSAI? Well, we're building an institution-based workflow that you as an institution can adopt and deploy locally, with the objective of creating archival records of Git hosting platforms, and of the repositories on those platforms that are of interest to you and in your scope, basically. At a very, very high level, this starts on the left-hand side of the slide with the selection policy, which is something that the crew at NYU has been working on in conjunction with the newly created role of a software curation specialist. This selection policy defines, you know, which Git hosting platforms are in scope, which accounts on those platforms are in scope, maybe even which repositories are in scope, right? But most importantly, also which components of, let's say, a GitHub repository are in scope. So do I want the code? Likely, right? Do I want the issues? Yeah, absolutely. Do I want the names, maybe even the GitHub identities, of the contributing entities, the developers contributing to the code? Those sorts of boundary decisions for a curatorial process are defined in the selection policy. Then we're using the Arkham workflow engine from the University of Pittsburgh, which has already been really, really good at interacting with Git hosting platforms, and we modified it to the extent that it can trigger Memento Tracer, a somewhat novel web archiving framework that we're developing at Los Alamos. Memento Tracer basically works like a remote-controlled car, right? It needs a set of instructions to know exactly how to interact with the website: which links to click on, in what succession, where to scroll, which components of an archival record to actually capture, right? So Memento Tracer creates a standard WARC file, which is then ingested back into Arkham and can be replayed to a user from there. It can also be handed to a quality assurance process that our collaborators at Old Dominion are currently working on, which assesses whether everything we wanted to grab, per the selection policy, was really captured. And if not, what did we miss? And what's the damage to the thing that we just captured, right? Do we have to re-initiate our archival process, or are we still okay? A non-trivial question to answer. And then down the road, for, let's say, preservation purposes, post-processing, and so on and so forth, we'll ingest the WARC file into our institutional repository. Since we're currently deploying this pipeline at NYU, and NYU uses InvenioRDM, that is our lingua franca. However, this could be your DSpace, Fedora, you name it, as the first repository to ingest your WARC file.
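To make the capture step concrete, here is a minimal sketch that fetches the in-scope components of a repository from the GitHub REST API and writes them into a WARC file with the warcio library; the selection policy and repository are hypothetical, and this is a generic illustration rather than the actual Arkham or Memento Tracer interfaces.

```python
# Minimal sketch: capture in-scope components of a GitHub repository
# via the REST API and write them into a WARC file using warcio.
# The selection policy and repository are hypothetical.
from io import BytesIO
import requests
from warcio.statusandheaders import StatusAndHeaders
from warcio.warcwriter import WARCWriter

POLICY = {"repo": "example/repo", "components": ["", "/issues", "/contributors"]}

with open("capture.warc.gz", "wb") as fh:
    writer = WARCWriter(fh, gzip=True)
    for suffix in POLICY["components"]:
        url = f"https://api.github.com/repos/{POLICY['repo']}{suffix}"
        resp = requests.get(url, timeout=30)
        http_headers = StatusAndHeaders(
            f"{resp.status_code} {resp.reason}",
            list(resp.headers.items()),  # headers copied as-is for brevity
            protocol="HTTP/1.1",
        )
        record = writer.create_warc_record(
            url, "response",
            payload=BytesIO(resp.content),
            http_headers=http_headers,
        )
        writer.write_record(record)
```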
So if any of this resonates with you, if you're in a position to be interested in, let's say, archiving your scholars' and your staff's Git hosting repositories and the traces there, please come and find us and let's talk about this more. We're happy to collaborate and experiment more on this. Thank you.

All right, hello. I am Tina Baich, and I'm here in my capacity as SPARC Visiting Program Officer for the U.S. Repository Network, and I'm gonna give you a little bit of background leading up to the content on my first of three slides. COAR, the Confederation of Open Access Repositories, launched the Modernizing the Global Repository Network initiative in July of 2021. In surveying their U.S. membership for the launch, COAR identified a clear priority for the U.S.: breaking down institutional silos and developing a more cohesive approach and greater collaboration around repositories. SPARC then partnered with COAR to start the U.S. Repository Network initiative in September of 2021 with the goal of addressing these issues, and I've been working on that extensively since then. The USRN, as I will abbreviate the U.S. Repository Network, is envisioned as an inclusive community committed to advancing repositories in the U.S., and in this context, U.S. repositories really refers to all open research repositories based in the U.S., regardless of content, host, or platform. That is, repositories containing articles, data, gray literature, or emerging forms of scholarship; repositories hosted by higher education institutions, research centers, or other nonprofit organizations; and repositories using open source or vended platforms are all considered to be part of the network. All such repositories are welcome to participate in the USRN as we seek to build value for all repositories in the U.S., regardless of their individual level of participation. So the first task in catalyzing this U.S. Repository Network was to develop a strategic vision for U.S. repositories through broad community consultation. We put together a 63-member expert group, some of whom I'm sure are here in the audience, including the speaker just before me, Martin. The expert group included U.S. COAR members, SPARC Steering Committee members, library leaders, repository managers, and consortium leaders, and was also representative of institutions across a range of enrollment sizes, distributed regionally throughout the U.S. So thank you to everyone here who participated in that part of the process. As I said, I know some of you are here today. The expert group responded to a survey and participated in an ideation session to aid us in drafting a strategic vision for U.S. repositories, as did participants in community calls with COAPI, the Coalition of Open Access Policy Institutions, and OpenCon librarians. The vision was reviewed by the expert group and then opened for public comment for about a month. And all of that finally led to the strategic vision that you see on the screen here, that community-driven strategic vision. Following that, we assembled a smaller, more nimble steering group to assist us in developing an action plan. This steering group is led by Vicki Coleman of North Carolina A&T State University and Martha Whitehead of Harvard University as co-chairs. And I know Vicki is at CNI, and I know some of our other steering group members are here, so maybe give a little wave if you're in the audience. There's Vicki back there. Hi, Vicki. So thank you to everybody who agreed to be part of that steering group, and to Vicki and Martha for helping to lead it. Through that process, we identified three areas of action, which you see on the screen, the first being engaging with OSTP and federal funding agencies on implementation of public access guidance. The release of the August 2022 Nelson Memo, which we've already heard a lot about here today, offers an immediate opportunity to engage with OSTP and federal funding agencies and to advocate on behalf of the U.S. Repository Network. The entry point for this advocacy will be the development of, as is mentioned in the Memo, desirable characteristics of and best practices for sharing in online digital publication repositories.
These characteristics are expected to be analogous to the existing desirable characteristics of data repositories for federally funded research. The USRN is seeking to work with OSTP to define an appropriate set of desirable characteristics and best practices for repositories that support their public access guidance and also allow many of your repositories to be included among agency-designated repositories to fulfill compliance for your researchers. At the same time, the USRN will seek to raise awareness of the benefits of a distributed network based at universities and research centers in support of compliance, as I just mentioned. The second piece is to develop a network governance model. Ensuring the ongoing sustainability of the U.S. Repository Network as a community-driven initiative is essential. I would hate to see all the hard work that's gone into this already go to waste. So we wanna ensure that we have a sustainability model, and that requires the creation of a community governance structure for the network. We'll be working on that in the spring. And then finally, building community and external relationships. Success in this work also requires a community invested in actively moving the strategic vision for U.S. repositories forward, and the USRN will work to identify and build relationships with potential partners, in addition to increasing awareness of and participation in the network by the U.S. repository community. And finally, if you'd like to get updates, including ways to get involved, you can take a picture of that QR code, and it'll take you to a form where you can sign up to receive those updates. And there's a link to more information on the SPARC website. I'm Tina Baich; if you wanna reach out to me directly, my email is tina@sparcopen.org. Thank you so much.

Hello everyone, good afternoon. I'm Simeon Warner from Cornell University Library, and I'm delighted to have a few minutes to talk to you about OCFL, the Oxford Common File Layout, a storage foundation for digital preservation. I'd like to start by emphasizing that we can successfully ensure access to digital content over time only when technical approaches are combined with strategies, policies, and actions. OCFL is designed to be one component of a technical approach. Digital objects and their storage are essential components of all digital preservation models, and the technical choices in these components can be more or less aligned with preservation needs. OCFL has been designed with this alignment in mind. But first, a couple of background observations. Your data will last much longer than your repository software, I hope. Also, one of the greatest risks to data lies in the selection, arrangement, organization, or reorganization of that content. We thus want a long-lasting storage solution, and I think many benefits accrue from a shared approach. Perhaps the most obvious, to our bean-counting minds, is that we might share the effort of developing solutions around a shared approach and reduce implementation costs. But perhaps more important in this arena, where bugs might mean data loss, is that minds working together produce better solutions, a safer product, and we are less susceptible to blind spots. So, about OCFL. OCFL comprises two parts. The first is a specification for arranging the files or data streams of a digital object. Each object has an identifier, one or more versions, fixity information, and some administrative metadata.
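To make that structure concrete, here is a minimal sketch that writes a single-version OCFL-style object to disk; it is abridged for illustration (for example, the copy of the inventory inside the version directory is omitted), and the object identifier and files are hypothetical. See the OCFL 1.1 specification for the full requirements.

```python
# Minimal sketch: write a single-version OCFL-style object to disk.
# Abridged; e.g. the copy of inventory.json inside v1/ is omitted.
import hashlib
import json
from pathlib import Path

def sha512(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()

def write_object(root: Path, obj_id: str, files: dict[str, bytes]) -> None:
    root.mkdir(parents=True)
    # NAMASTE conformance declaration marks the object root.
    (root / "0=ocfl_object_1.1").write_text("ocfl_object_1.1\n")
    manifest, state = {}, {}
    for logical_path, content in files.items():
        digest = sha512(content)  # fixity is built in via digests
        content_path = f"v1/content/{logical_path}"
        (root / content_path).parent.mkdir(parents=True, exist_ok=True)
        (root / content_path).write_bytes(content)
        manifest[digest] = [content_path]   # digest -> stored file(s)
        state[digest] = [logical_path]      # digest -> logical name(s)
    inventory = {
        "id": obj_id,
        "type": "https://ocfl.io/1.1/spec/#inventory",
        "digestAlgorithm": "sha512",
        "head": "v1",
        "manifest": manifest,
        "versions": {"v1": {"created": "2022-12-12T00:00:00Z", "state": state}},
    }
    inv = json.dumps(inventory, indent=2).encode()
    (root / "inventory.json").write_bytes(inv)
    # Sidecar digest lets tools verify the inventory itself.
    (root / "inventory.json.sha512").write_text(f"{sha512(inv)} inventory.json\n")

write_object(Path("objects/obj1"), "ark:/12345/obj1",
             {"metadata.xml": b"<dc/>", "video.mp4": b"...big file..."})
```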
The second is a specification for how digital objects may be arranged on storage. Both parts are designed to align with digital preservation needs, and I will highlight a few aspects. I'm gonna say it again: prepare for the future. Your data will outlast your software, so a key aspect of OCFL is the separation of data from any particular software implementation. We continue to be in a period of rapid change in storage infrastructures. OCFL is based on a simple file system metaphor that allows us to map storage to disk and tape systems and cloud object stores. OCFL objects can be readily replicated across any combination of storage types. This might be done natively within the repository, separately from the management application, or even with simple file system tools. Strong fixity is built into OCFL, allowing for monitoring of corruption or even potential malicious action. Efficient versioning for object updates is perhaps the most important benefit of OCFL over other solutions. In OCFL, versions are immutable, but only new or changed files are recorded in a new object version. Thus, unlike systems that store a complete copy of an object for every version (say, maybe you have a BagIt bag for each version), with OCFL, a change in a small metadata file doesn't mean the replication of a large video file associated with it. Lastly, OCFL is designed to be simple and survivable, and is built on a few open specifications. Patrick Hochstenbach's wonderful hand-holding hard drive is perhaps extreme, but I think you get the message.
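To make that versioning point concrete, here is a minimal sketch, continuing the hypothetical object from the previous sketch, in which a new version stores only files whose digests are not already in the manifest, so updating a small metadata file doesn't re-store the large video file (the sidecar update is again omitted for brevity).

```python
# Minimal sketch of OCFL-style delta versioning, continuing the
# hypothetical object above: only new or changed content is stored.
import hashlib
import json
from pathlib import Path

def sha512(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()

def add_version(root: Path, files: dict[str, bytes]) -> None:
    inv = json.loads((root / "inventory.json").read_text())
    vnum = f"v{int(inv['head'][1:]) + 1}"
    state = {}
    for logical_path, content in files.items():
        digest = sha512(content)
        if digest not in inv["manifest"]:  # new or changed content only
            content_path = f"{vnum}/content/{logical_path}"
            (root / content_path).parent.mkdir(parents=True, exist_ok=True)
            (root / content_path).write_bytes(content)
            inv["manifest"][digest] = [content_path]
        # Unchanged files are referenced by digest; no new copy is made.
        state.setdefault(digest, []).append(logical_path)
    inv["versions"][vnum] = {"created": "2022-12-13T00:00:00Z", "state": state}
    inv["head"] = vnum
    (root / "inventory.json").write_text(json.dumps(inv, indent=2))

# Updating the small metadata file re-stores only that file; the large
# video is carried forward in v2's state by its existing digest.
add_version(Path("objects/obj1"),
            {"metadata.xml": b"<dc>updated</dc>", "video.mp4": b"...big file..."})
```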
So where are we with OCFL's development and implementation? We gave a project briefing at CNI in 2019, which feels a million years ago now. OCFL version 1.1 was released in October of this year. It includes minor revisions over the two-year-old stable version 1.0, based on community implementation experience and feedback, and includes details of how object versions may follow different specification versions, allowing evolution of the specification. We also have eight community extensions that have been agreed over the last couple of years. And we're now discussing within the community what OCFL version 2 might add. Things under consideration include how OCFL might support distributed objects, how to handle branched versioning models, and how to handle cases where files must be entirely deleted, perhaps for legal reasons. Obviously, deletion is at odds with the idea of strict immutability, so there may need to be some options within an implementation to specify either strict immutability or the ability to make deletions. There's been considerable OCFL implementation work. Fedora 6 was released in summer 2021 with OCFL as the storage substrate. It also provides a Java library that can be used elsewhere. The InvenioRDM platform that Martin mentioned is moving toward using OCFL as its underlying storage layer, and as a result there's also a Python library that could be used elsewhere. There are also a number of local implementations that I've listed here. Of particular note is the implementation listed at the bottom: Andrew Woods talks tomorrow at 1 p.m. on a new storage paradigm for sustainable digital stewardship, where Andrew will discuss work to migrate Harvard's large-scale digital archive forward to new technologies, which include OCFL as the storage foundation. Finally, I'd like to thank everyone who's contributed to the OCFL work in the community, and also my co-editors: Andrew Woods and Rosalyn Metz, who are here at the meeting today, and Neil Jefferies and Julian Morley, who are not. Thank you very much.

Good afternoon. I'm Sayeed Choudhury with Carnegie Mellon University Libraries. I'm here to talk about university-based open source program offices, or OSPOs. Even though I have seven minutes, I thought deliberate provocation would be the best way to make my talk memorable. So here's the provocation: I believe curating open source software represents the last, best opportunity to convince researchers to choose university-based infrastructure for open scholarship. The reality is that researchers often choose federal agency or private company options rather than university or library-based services. The recent Nelson OSTP memo emphasizes agency-designated repositories by default, which could have profound implications for how researchers perceive university-based infrastructure. I want to be clear that there is value in agency-based repositories. If you're interested in how libraries and publishers might work with such existing agency-designated repositories, then please consider attending the session tomorrow at 11:15 about the public access submission system. The slide here shows three outputs from academic research moving from left to right: articles, data, and software. In a fundamental sense, the 2013 Holdren OSTP memo and the more recent 2022 Nelson OSTP memo represent policies that have also flowed from left to right in this diagram. And yet software remains unaddressed, even though there is interest from OSTP and from federal funders in developing guidance or policies related to open source software. Open source program offices, or OSPOs, are community conveners and centers of competency that help curate, share, and maintain software. OSPOs can focus on the arrows on the right side of this diagram, which represent translation, or impact of research and learning beyond the walls of the university. It was roughly a decade ago that we saw the rise of library-based data management services, which have had some impact, but I think they perhaps also reflect some lost opportunities. We have a similar but more compelling opportunity now with OSPOs, especially given the importance of software to the reproducibility of research and growing cybersecurity concerns. With both of the OSTP memos, our community has been reactive. We have an opportunity to be proactive and to inform OSTP and federal funders about policy and guidance regarding open source software. It's also worth noting that researchers are more willing to ask for help with software than they are with either articles or data. So the demand is present, and the federal government is seeking constructive advice and guidance, so we would be wise to help them and build collective supply and capacity. One key differentiator and advantage of open source software compared to open data is the existence of a canonical set of licenses managed by the Open Source Initiative. These licenses codify IP issues, risk, legal compliance, and so on for open source software. However, much of the Open Source Initiative's work has been by the private sector, for the private sector. If we are to build capacity for supporting open source software, the academic community needs to examine these licenses with a balance between academic freedom, reproducibility, open scholarship, and risk management.
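As a small illustration of what license curation might involve, here is a minimal sketch that checks a repository checkout for a license file and matches it against a few OSI-approved licenses by SPDX identifier; the marker phrases and repository path are illustrative, and a real OSPO audit would use much richer matching (for example, full SPDX license texts).

```python
# Minimal sketch: detect the license of a repository checkout against a
# few OSI-approved licenses by SPDX identifier. Marker phrases and the
# repository path are illustrative; real matching would be far richer.
from pathlib import Path

MARKERS = {
    "MIT": "permission is hereby granted, free of charge",
    "Apache-2.0": "apache license, version 2.0",
    "GPL-3.0-only": "version 3, 29 june 2007",
    "BSD-3-Clause": "redistribution and use in source and binary forms",
}

def detect_license(repo: Path) -> str:
    """Return a best-guess SPDX identifier for the repo's license file."""
    for name in ("LICENSE", "LICENSE.txt", "LICENSE.md", "COPYING"):
        path = repo / name
        if path.is_file():
            text = path.read_text(errors="ignore").lower()
            for spdx_id, phrase in MARKERS.items():
                if phrase in text:
                    return spdx_id
            return "unrecognized license text"
    return "no license file found"

print(detect_license(Path("some-research-repo")))  # hypothetical checkout
```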
The best way to move forward is through a network of OSPOs, starting within individual universities and then organizing into networks such as OSPO++ or something similar. OSPO++ is a community that is working to adapt and augment the corporate OSPO model, which has existed for years, to the university context and mission. Through OSPO++ we are developing a playbook for building university-based OSPOs. While this playbook incorporates lessons learned from an initial group of university OSPOs, there is more work to do. So if you'd like to learn more or would like to be involved... how about we do that later? Okay, let me just finish and hope we get this slide back up. I'll try not to take it personally. If you'd like to learn more or would like to be involved, please contact me with any of the three options listed on the slide, or you can see me out at the reception or anytime during the conference. I wanna thank the Sloan Foundation for their generous funding of the Carnegie Mellon OSPO and also of OSPO++. I will end by noting that all of this work is being conducted in partnership with HELIOS, and I believe you'll hear next from Alicia with a relevant update and, hopefully, her slides. Thank you.

Am I safe? Am I safe? All right, dodged a bullet there. Thank you, Sayeed. All right, thank you everyone. I'm Alicia. I am presenting on behalf of my colleague Caitlin Carter of the Open Research Funders Group, here to brief you on the Higher Education Leadership Initiative for Open Scholarship, known as HELIOS for short, and more specifically a project within it dedicated to advancing shared infrastructure to support open scholarship, for which I serve as co-lead. HELIOS emerges from the work of the National Academies of Sciences, Engineering, and Medicine's Roundtable on Aligning Incentives for Open Scholarship. This multi-year project brings together key interested parties, including senior leadership at universities, federal agencies, philanthropies, international bodies, and other strategic organizations, to rethink research evaluation to better incentivize and reward openness and transparency. HELIOS features three core components, represented by the three big bubbles you can see there. It is currently comprised of 88 research institutions making up a national community of practice. Each institution's president has designated one representative to HELIOS, and that represents the presidential commitment. Member representatives at this time include a broad mix of senior campus leaders, including CIOs, presidents, provosts, VPs for research, university librarians like myself, and others. That presidential commitment to open scholarship and to the HELIOS initiative is a unique and important component, which empowers institutional representatives with the authority to speak on behalf of their campuses and to engage campus-level action around HELIOS work. That campus engagement is the third component. HELIOS subscribes to a theory of change that says that the way we move towards achieving the promises of open scholarship is through mutually reinforcing vectors. So the university leaders, academic departments, funding agencies, scholarly societies, and so on that you see in the circle there are all giving signals to the researchers who produce the work, and those signals, as the theory goes, should all be in alignment with each other and should consistently point towards open. The sectors need to work together to foster communication and action to get there.
So to that end, HELIOS facilitates and convenes interaction among these sectors to help pursue and strengthen those alignments. Our community of practice has formed four working groups that are each trying to tackle different areas of focus, including a working group focused on the language of tenure and promotion review, another group focused on good practices for doing open scholarship, another on identifying points of cross-sector alignment, and the one I'm involved with, advancing shared open infrastructure. Now, many of the project briefings at CNI, and several of the excellent lightning talks that we just heard, have something to do with scholarly communications infrastructure. We all know how important it is, and we all know that there are gaps in the landscape. Our working group has recognized that research communications infrastructure, the data storage, the protocols, the software interfaces, these are the critically enabling mechanics that are needed to make open scholarship happen, regardless of any policies or incentives that may exist. And among other challenges, we've recognized that one of the hurdles to advancing infrastructure, particularly non-profit or academy-owned infrastructure, is getting the campus-level decision makers aligned with each other on a strategy and ready to make key investments that cut across their portfolios. And so that's why our group is currently working to develop and release a concise guide to investing in research communications infrastructure, which is due out in early 2023. Decision makers across campuses, those with budget authority like CIOs, VPs for research, university librarians, and more, all might have some infrastructure responsibility within their portfolios, and with limited resources, they need to be able to bring their unique perspectives together and develop an informed and unified campus-level strategy. Our guide will outline these key issues in a succinct way for busy executives: issues around governance, long-term versus short-term economic outlooks, data portability, security, interoperability, and other considerations that may ultimately have an impact on how new knowledge is disseminated. After that, our working group will begin looking at further concrete steps we can take to advance existing, or develop new, effective and sustainable infrastructure options. In particular, we know that our infrastructure has to be accessible to researchers everywhere, beyond just well-resourced research institutions and beyond just federally funded scholars, to really achieve the promises of open scholarship. Our working group leads convened a multi-sector summit last Monday here in DC, just down the road at the National Academy of Sciences, to share our work in progress and to get feedback. We had funding agency heads in the room, we had the leaders of the AAU and the APLU, we had society presidents, we had leaders from government agencies such as the NSF and the NIH, among others. And through that conversation, we identified some opportunities. For example, to collaborate to improve and enhance federal agency policy in support of open scholarship, or possibly to engage in further conversation about how organizations like the AAU measure and reward research impact. We understand, for example, that it's currently weighted towards published papers and could be doing more to recognize the impact of data sharing.
We also identified shared interest across sectors in thinking more deeply together about how the federal government could or should provision essential public-interest infrastructure components for public use. These conversations will continue into 2023. I'm very excited about the potential that I saw in these conversations for these sectors to work together in mutually reinforcing ways to really build and support a robust and lasting system of knowledge infrastructure, one that will enhance the competitiveness of our institutions, accelerate human problem solving at scale, unleash the potential of computational approaches to inquiry in areas like health and climate, and ensure that the knowledge and data we produce today remain available and comprehensible to scholars generations from now. If you want to learn more or join us, you can find me at the reception right after these talks, or you can reach out to Caitlin or myself using this contact information. Thank you.

All right, last guy before you get to the shrimp cocktail, so I'll try to make it quick. My name is James English, and it's my pleasure to be here with Michele Kimpton, who leads the Palace Project division at LYRASIS. For those of you who do not know what the Palace Project is, it's simply an e-book application that provides public library patrons greater access to e-books, audiobooks, and other materials that the library may purchase from a variety of vendors. What I'm really excited to talk to you about is that we are trying to introduce this to academic libraries, in partnership with a couple of universities that we've been working with over the last couple of years, to try this technology out and develop it so it can bring greater access and address some of the problems and challenges that we see in academic e-book access. As you all know and probably experience, you have fragmented collections. Unlike public libraries, you just can't pick one aggregator to be your e-book service; you may have to use several of them. There are major publishers, just like in the public library space, that account for the majority of the content you buy, but as academic libraries, you can't just restrict yourselves to a few. You want to acquire collections from a broader number of digital sources, such as your institutional presses, your institutional repositories, global and international repositories, and the hundreds of university presses that may not be your own. A lot of this creates another challenge in the academic user experience: in trying to navigate from your catalog to the actual piece of content, all the metadata and the linkages in between create some confusion for your users, especially first-time users. And in fact, a generation of users grew up on mobile. You have over 16 million users between 18 and 29 attending universities in the U.S., and 96% of them own a mobile device. When you survey them, it's about 50-50 in terms of who owns a mobile device versus a PC. Mobile internet traffic is over 50%, so it's greater than that of desktop, and mobile app traffic is 90% of that traffic. So your users are online in mobile apps, not in a mobile web browser or responsive web browser; that's not where your users are. And yet there are zero academic platforms out there for mobile access to content. Those are some of the challenges we have to deal with at libraries, but there is a solution.
There's one that was created in the public libraries and has been growing like gangbusters there. We're in over 11 states, eight of which are providing the platform statewide. We are looking to provide this for academic libraries, to let you connect your e-book content, collections, and repositories, as well as metadata, through this integration layer, and to deliver that to where your users are: on mobile devices. Is this supposed to be your one and only e-book solution? No, it's a secondary solution. But what is unique about it is that it does bring the content to your users, whether they be on campus or off campus, remote learners, or just someone who can't afford a PC and all they have for their scholarship is a mobile device. One of the nice benefits is that it uses your institutional login. You don't have to have your users create other identities on other platforms from the myriad of solutions that you may acquire content from. It has direct catalog integration, connecting your users to the content that they have rights to access. It has a built-in EPUB reader, PDF reader, and audiobook player. And this unified catalog and bookshelf allows a user to go to one place to quickly find and download their materials, and to advance their research, continue their scholarship, or do their coursework at your university. So, why are we here? Well, we're here to let you know that the Palace Project is coming to academia, and we're looking for publisher partnerships and aggregator partnerships, whether you're a for-profit or a non-profit providing content to university libraries, to join our existing partnerships with folks like Columbia University, NYU, and now the University of California, to help us better understand the needs of academia, how we need to improve the platform, and how we can grow and make this technology more available to others. Our contacts are up there, Michele Kimpton and myself, and you can find out more about the Palace Project online at thepalaceproject.org. We had our first press release out today for academic libraries, so it's really exciting to be here, and I really look forward to talking with many of you out there who share some of these challenges and the desire to meet them, and to exploring the Palace Project for the academic library at your university. So, reach out to us. And what's next? There's shrimp cocktail and a great reception, and I think I did that really quick.

Okay, I'm gonna bring this slide up, because it was a three-slide limit, so every slide counts, and I wanted to make sure that got some air time. And I broke my own rule and applauded between each of the speakers, because they did a wonderful job providing so much information with a three-slide limit, a seven-minute limit, and the ominous threat of my phone alarm staring at them from the front row. So, please join me in giving them another round of applause.