Good afternoon, everyone. Welcome, and thanks for coming to our session. My name is Chris Bourg. I'm the Director of Libraries at MIT, and I'm thrilled to be able to present to you with two of the amazing members of my awesome leadership team. With me today are Heather Sardis, our Associate Director for Technology and Strategic Planning, and Erin Stahlberg, our Associate Director for Collections and Faculty Relations Strategy. We're going to talk to you today about a project we've embarked on to write about, and speculate about, generative AI and research integrity.

So we have a relatively new president, and when she was inaugurated in January of this year, one of the themes of her inaugural speech was in fact generative AI. She laid out a path for MIT, saying that MIT has to be a place where we help society come to grips with the tectonic forces of artificial intelligence, containing its risks and harnessing its power for good. And when she talked about that, she acknowledged that lots of places are going to do good work to help society come to grips with generative AI, to develop tools, policies, norms, and ways of managing and mitigating risk, but that this really is a sweet spot for MIT. In fact there is a recognition, and we try to say this in as humble a way as possible, that the world is looking to MIT to see what MIT is going to say about things like generative AI, how the world is going to grapple with it, and its implications.

And so, to create some organization around all of the various strands of work and thinking about generative AI happening across MIT, from the philosophy folks, to obviously our computer science and engineering folks, and everywhere in between, MIT's leadership put out a call for proposals across the entire institute. The call asked for white papers that would inform the public discourse and lay out frameworks and road maps: not solving the problems, but laying out what the road maps are, what we need to be thinking about, what policy recommendations we need to consider, and what calls to action might be appropriate at this moment in time as generative AI starts to explode and proliferate. There was money attached, and any principal investigator or faculty member across the institute could submit a proposal.

So we decided in the libraries that we wanted to submit a proposal, and we decided that we would focus our proposal on research integrity. I think most of us know that this is a moment in time, actually it's been a long few moments in time, where public trust in academia is waning and trust in science and in the truth is declining. In fact, there have been a number of recent high-profile instances of research misconduct. So we latched onto the question of how the proliferation of generative AI tools intersects with these issues around research integrity, and we felt it was really important for us to develop a proposal there. Lots of folks are thinking about the risks, the possible harms of generative AI that we need to look out for: bias, lack of attribution, lack of transparency.
Lots of people are thinking about that, and that is absolutely necessary work. But we thought our sweet spot might be to speculate on ways that we could leverage the power of generative AI to enhance, rather than erode, public trust in science, academia, and research. It was a tall order that we set for ourselves, but we thought it would be an interesting one. And again, we felt that research integrity as a concept was a really important place for us to explore. Obviously, libraries are a place that cares very deeply about the integrity of research and of the information that's necessary for decision making, for advancing technology and innovation; there are just lots of good reasons why research integrity matters. As a library, we're also super invested in countering disinformation, again, very much in the spirit of nurturing and propping up public trust, and trying to regain some of that public trust and support for academia and for research.

So, why the MIT Libraries? We had a speaker not long ago, Richard Ovenden, whom some of you know, Bodley's Librarian, who came to the MIT Libraries to talk about freedom of expression, and this quote from that talk really resonated with this project as well: libraries and archives are institutions that help society cling to the truth, or at least get closer to truth. There are lots of worries about generative AI and a lack of truth, a lack of transparency, deepfakes, and so forth, but are there ways it could be leveraged to help the central mission of libraries in propping up truth? We wanted to go there, and we thought the libraries were the place to do it.

Also, four or five years ago the MIT Libraries launched a research center, the Center for Research on Equitable and Open Scholarship (CREOS). So we already had a home within the libraries to start doing some research, and to apply the convening power and the research power that we have within the libraries to this set of topics. Another reason why the libraries, of all the places at MIT, should talk about research integrity and how it intersects with generative AI is in large part because we are deeply invested in supporting and strengthening the entire research enterprise. Also, the libraries are the unit on campus that thinks critically about, and has expertise in, the entire research life cycle. Any given department or faculty member knows a slice of it, but the libraries have a responsibility for having some expertise across the whole of it. And we already have significant expertise in doing research on how scholars create and disseminate knowledge, and at the MIT Libraries we have a reputation, I think a fairly well-earned one, as leaders in open and equitable scholarship, which we think is really important around this concept of generative AI and research integrity.

And it takes a team, right? It's not just the three of us presenting today. The team that's been working on the white paper also includes Micah Altman, who is part of CREOS, the Center for Research on Equitable and Open Scholarship; Sue Kriegsman, who is the Deputy Director of CREOS; Nick Lindsay from the MIT Press; and then Heather and Erin as well.
So we've got a big team, lots of smart people, and we've actually gotten some input from others on the first draft of our paper.

All right, so that's where we're coming from: the background of how we decided that this small group of us at the MIT Libraries was going to start really digging into the intersections between generative AI and research integrity, and do some speculating on how generative AI could be leveraged for good in helping to increase the integrity of research. What we're going to do today: we'll share our progress and what we found in reviewing emerging and potential roles of generative AI in policy and as part of the scientific information infrastructure. We'll talk a little bit about some specific applications of generative AI that we're speculating might have the greatest potential for promoting impact, openness, equity, and trust in science. And we'll also identify what needs to happen in terms of research to ensure that any applications of AI meant to bolster research integrity would align with the values that underlie good science and the core values of libraries. And with that, I'm going to turn it over to my colleague Heather.

I should note that all of the images you see in the presentation were generated using AI; we used Midjourney. We put in different prompts relating to the themes of libraries and information science in the talk, and these are the creative results that we got back.

So, generative AI has already shown its ability to offer creative partnership to human thinkers, artists, scientists, librarians, and more. It feeds human creativity, it can rapidly diversify a set of potential solutions to a problem, and it can take up the baseline elements of the creative process, enabling humans to focus our creativity and intellect on the most critical or impactful aspects of a given task. Applying generative AI to research holds that same creative potential, provided it's grounded in the same values that inform research integrity.

Research integrity can refer narrowly to prohibitions against misconduct such as fabrication, falsification, and plagiarism. It can also refer to the active responsibility to conduct research and communicate results with honesty, transparency, and objectivity. Or it can refer to a broader commitment to ethical principles such as respect and accountability. Likewise, research integrity is aimed at ensuring that the research being produced is both replicable and reproducible, through data openness and accessibility as well as thorough descriptions of research methodologies. In adhering to these elements of research integrity, researchers ensure that they are contributing new generalizable knowledge and aligning the processes and outputs of science with the governing core values of science.

So informing research integrity are the values that underpin the enterprise of research. They could be values related to the inputs to scholarly communication, including respect for intellectual property and respect for subjects' agency: consent, data privacy, respect for persons, information agency. They could be values related to the content of scientific communication, such as factuality, honest uncertainty, citation, intellectual attribution, and so on. They could be values related to participation in science, including opportunities for inclusion and the equitable distribution of effort and responsibility.
Or they could be values related to the systems and processes of scholarly communication, including transparency, openness, trustworthiness, equity, durability, societal value, and environmental sustainability. So now we'll explore some of the opportunities and challenges that we perceive in the practical application of generative AI to research integrity, driven by the values that underpin research.

We see opportunities for generative AI to reshape peer review. At its current stage of development, generative AI has shown promise in summarizing and evaluating documents. The applications of gen AI to peer review include the bullet points that you see here: reducing the impact of existing reviewer biases; reducing burdens on reviewers during the process of review; generally making the process more consistent and efficient; accelerating the publication process through those efficiencies; and providing a framework for open documentation, measurement, and evaluation of the peer review system. The ways we see generative AI having these impacts could be by identifying potential peer reviewers through analysis of the relevant published literature, providing reviewers with a summary of related literature, providing reviewers with an abstract of the submission's literature review, methodology, and results, and validating article bibliographies, protocols, replication requirements, and so on. Basically, taking the narrative strengths that generative AI has shown and applying them to the different aspects of the publishing process.

We also see opportunities for applications of generative AI to substantially expand the availability of open scholarly data, by enabling better implementation and enforcement of existing data sharing policies and improving the discovery and documentation of existing research data. At its current stage of development, applications of generative AI to annotate and validate data are emerging, including several research teams at MIT who are applying AI tools to the work of repairing legacy metadata, fixing problems with older OCR documents, and creating new indices from the implicit information in text. Ultimately, we can envision generative AI enhancing existing data sets with automatically generated documentation and metadata, automating the process of checking submissions against journal data replication policies, and improving the interfaces to data discovery systems as well as the relevance of the results they produce. There's huge potential for generative AI to solve problems of open data availability and accessibility, improve the reproducibility and reliability of scientific results, and lead to the generation of new discoveries by promoting data reuse. In turn, greater availability of open data can then benefit generative AI development and applications by supporting new research and preventing model degradation.
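To make the data-repair idea concrete, here is a minimal, purely illustrative sketch of what LLM-assisted cleanup of an OCR-damaged record might look like. This is our sketch, not a tool described in the talk: it assumes the `openai` Python client and an OpenAI-style chat completions endpoint, and the prompt, model choice, and sample record are all hypothetical.

```python
# Hypothetical sketch: using a chat LLM to repair OCR-damaged text and
# propose structured metadata. Assumes the `openai` client
# (pip install openai) and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """You are a cataloging assistant. The text below came from a noisy OCR scan.
1) Correct obvious OCR errors without changing the meaning.
2) Return JSON with keys: corrected_text, title, author, year,
   uncertain_spans (a list of passages you were not sure about).
Text:
{record}"""

def repair_record(record: str) -> dict:
    """Ask the model to repair one OCR'd record and flag its own uncertainty."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(record=record)}],
    )
    return json.loads(response.choices[0].message.content)

noisy = "Tlie Orig1n of Spec1es, by Charlcs Darw1n, 18S9"
repaired = repair_record(noisy)
print(repaired["corrected_text"])
print("Needs human review:", repaired["uncertain_spans"])
```

The design point worth noting is the `uncertain_spans` field: rather than letting the model silently guess, the workflow asks it to flag what it was unsure about, keeping a human cataloger in the loop.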
We also believe that it's both critical and possible for generative AI applications to increase meaningful access to scientific publications. Through its capacity for summarization, annotation, and authoring of non-technical documents, generative AI could broaden the accessibility of scholarly publications by translating English-language publications into a broader array of languages, augmenting publications with structured annotations that communicate the organization of an article, describing specialized content such as figures, tables, and equations for print-disabled readers, and generating plain-language summaries of scientific findings for non-technical audiences.

Finally, and perhaps most crucially for the future of human knowledge and scientific discovery, we believe that generative AI has the potential to increase the trustworthiness and integrity of science by reducing barriers to participation in science for scientists from countries with economies in transition and from developing economies. Applications of generative AI to broaden participation in scholarly publishing include accurate and timely translation of manuscripts into English for review, adapting AI authoring tools to the needs of authors who are learning English, adapting AI authoring tools for scientific writing, and developing AI tools to facilitate the peer review process for writers who are learning English. The successful application of gen AI to facilitate the standards of scientific integrity and enable a broadening of participation in scientific communication has the potential to accelerate the impacts of science generally and increase diversity in scientific fields.
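As a toy illustration of the accessibility applications described here, this is a minimal sketch built on openly available models rather than a commercial service. It assumes the Hugging Face transformers library and two public checkpoints; the abstract is invented, and off-the-shelf summarization is only a rough stand-in for true plain-language generation, which would need domain-tuned models and human review.

```python
# Toy sketch of two accessibility tasks from the talk: a condensed,
# plainer rendering of an abstract, and translation out of English.
# Assumes: pip install transformers torch sentencepiece
from transformers import pipeline

# Abstractive summarization as a stand-in for plain-language summaries.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# English-to-Spanish translation with an open MarianMT checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

abstract = (
    "We present a randomized controlled trial evaluating the efficacy of "
    "intervention X on outcome Y in a cohort of 1,200 participants, "
    "observing a statistically significant improvement relative to controls."
)

summary = summarizer(abstract, max_length=40, min_length=10, do_sample=False)
print("Condensed summary:", summary[0]["summary_text"])

translated = translator(abstract)
print("Spanish:", translated[0]["translation_text"])
```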
So now I'll hand it over to Erin to speak about the open research opportunities in scholarly communication.

Awesome, thank you Heather. This next slide is the same slide that Heather showed earlier, and I'm just going to call back to the goal here: to ground generative AI in the values that underpin the research enterprise. The main piece of the work going into the paper is to identify research questions, effectively a road map, because that's what we were asked to do, a road map or an impact paper, such that if we could successfully align generative AI with these research questions and use generative AI to advance the goals that underpin the research enterprise, then we could carry out the research enterprise in ways that increase public trust in science and increase research integrity. Many of the topics you'll see in the next slides were conveniently referred to earlier today by Cliff, so you'll see a lot that resonates. What we're really trying to do is help society figure out how we can actually use generative AI to move forward the initiatives that Heather described, in alignment with those values that we all aspire to in the research enterprise.

The first topic is scholarly communication and gen AI outputs. In general, the absence of systematic quality evaluation is the rule, not the exception, for scholarly communication processes, as I'm sure many of you know and have experienced. The absence of quality measurements is a barrier both to tuning AI models to be used in scholarly communication and to evaluating the interventions that AI produces. The design of appropriate outcome measures for scholarly communication interventions, and of the observational and experimental methods for evaluating interventions, is a critical open research question, both for enabling trustworthy scholarship with gen AI and for systematically improving scholarship at large.

So the first question here is designing foundation models so that they are reliably verifiable and transparent about their level of uncertainty. There are lots of researchers already working in this area; many folks are trying to figure out how to solve these problems, and we're echoing the need for research in that area to move these efforts forward. The second is developing standards, test methods, corpora, and auxiliary tools that researchers and the public could use to evaluate the quality of algorithmic outputs in various contexts and use cases, so that end users can tell how successful the algorithmic responses are. And the third, to connect to what Heather said earlier, is developing new paradigms for peer review: how can generative AI help us summarize, or put together, disagreement among peer reviewers and present it in the record as part of the publication stream?

Our second set of questions is around scholarly communication and gen AI inputs. Gen AI models do not inherently protect the anonymity of individual data subjects, and effective anonymization in gen AI can be achievable through known cryptographic approaches when that protection is built into the model by design at the training stage of model production. Gen AI models also do not inherently ensure alignment with the laws and regulations that govern data about individuals, including restrictions on publishing identifiable information, rights of correction and deletion, and limitations on the purposes for which the data can be used. So research is needed into efficient approaches to privacy-preserving training of gen AI models and into addressing personal information in the training of gen AI. Mechanisms to limit memorization, track provenance, support attribution, and align machine learning outputs with the specific requirements of copyright and licenses are also an open area of theory and application, and they will continue to be as AI, and the regulatory frameworks that apply to AI, continue to evolve.
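To give a flavor of what research on privacy-preserving training is grappling with: one widely studied approach, our illustrative example rather than anything the paper commits to, is differential-privacy-style training (as in DP-SGD), where each individual's gradient contribution is clipped and calibrated noise is added, so that no single training record can have an outsized, identifiable influence on the model update. A minimal numpy sketch of that one step, with arbitrary toy parameters:

```python
# Minimal sketch of the core DP-SGD move for a linear model:
# per-example gradient clipping plus Gaussian noise bounds how much any
# one training record (e.g., one data subject) can influence an update.
import numpy as np

rng = np.random.default_rng(0)

def dp_gradient_step(w, X, y, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One differentially-private-style SGD step on squared error."""
    residuals = X @ w - y                  # shape (n,)
    grads = residuals[:, None] * X         # per-example gradients, (n, d)
    # Clip each example's gradient to bound its individual influence.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, then add noise scaled to the clipping bound.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)

# Toy data: 100 examples, 3 features, known true weights.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
for _ in range(200):
    w = dp_gradient_step(w, X, y)
print("Recovered weights (noisy):", w)
```

The accuracy-privacy trade-off is visible directly: raising the noise multiplier strengthens the privacy protection and degrades the recovered weights.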
There are a couple of slides on AI governance, which should not be a surprise to you all, and which was also touched on pretty heavily by Cliff this morning, gratefully. Without governance, neither information markets nor the scholarly ecosystem will function well. Understanding the general functioning of the scholarly ecosystem as it evolves will require both basic and applied legal, information science, economic, policy, and social science research. Understanding and governing the scholarly ecosystem will also require self-regulation, because, as we all know, industry incentives are misaligned. And it will require systemic measurement: the collection and sharing of data that measures the behavior and performance of the scholarly ecosystem and the results of interventions in it. While these challenges are not likely to be addressed directly by gen AI applications, gen AI enhanced tools could make it easier to effectively collect and share information about the scholarly ecosystem, if those tools are designed to be open and auditable. And so on this slide we have a number of questions that, again, as part of a road map, we feel need further research: how gen AI does, could, and ought to affect the health and operation of the scholarly knowledge ecosystem; how gen AI affects the durability and sustainability of the ecosystem; how gen AI affects the norms and incentives for participating in science, as Heather was talking about; and how gen AI affects who truly participates in science and how the burdens of participation are distributed.

Also on the topic of AI governance is the enclosure, durability, and sustainability of the knowledge commons. It is not, for example, difficult to imagine that the large commercial publishers, which currently control the largest databases of volunteer-generated peer review, could use those corpora to train a peer review service that they would then commercialize. So clearly we need to do more work on designing systems and approaches that yield a healthy knowledge commons, and also on defining what we mean when we say a healthy knowledge commons. And as AI tools become increasingly integrated into the dissemination and interpretation of the scholarly record, new methods and institutions of digital preservation will need to be developed.

By changing the costs and the effort required in different scholarly activities, gen AI may have spillover effects on cultural norms and practices within academia; that seems actually quite likely. For example, the valuation of external peer review could be challenged if peer review becomes capital intensive. The use of generative AI may have asymmetric and discontinuous effects on attribution and ownership, which could exacerbate achievement gaps, or could enable human actors to shift responsibility for their bad actions away from themselves. So we're looking to research to design norms and practices for excellence in hybrid human-AI scholarship; I'm basically reading what it says on the screen. We also discussed in the earlier sections how systems of publication and peer review exhibit bias and create barriers to broad participation in science, and suggested some ways in which gen AI may be used to mitigate those problems. The broad use of gen AI, by changing the costs, incentives, and norms related to scholarly activities, has the potential to change participation and inclusion in unexpected ways that are difficult to anticipate. So approaches to theorizing, measuring, and engineering participation in science are an active area of research.

We're hoping that with this paper we're coming out with an actionable strategy, a road map for addressing the most promising opportunities to increase trustworthiness in both the products and the process of research. We are very much hoping that gen AI will serve us well in being able to mine research data and scholarly output at massive scale to surface new insights and formulate hypotheses; we see a lot of potential and benefit there, and the question is how to do it in a way that actually achieves those goals. As Heather discussed, there's the ability to translate the academic language of research articles into a wide, accessible array of languages, formats, and reading levels, and to advance research and science through those efforts. We're hoping for improvements in the quality and availability of research data, both to increase research integrity and to increase the pool of viable training data for AI. Again, as Chris said, we are big advocates of open, and have been for a very long time. We want to increase the pool of training data with scholarly information, accelerating the open availability of scholarly literature and data to increase the quality of the data being used in gen AI tools, and thus the quality of their outputs.
So, our next steps: the paper's due on Friday. Wish us luck. We have only a little bit of work to do when we get home from CNI, so we are going to go home and work with the rest of the team to finish it off. We intend to use the funding that we were awarded by the institute to host an additional workshop in the summer of 2023. If you have suggestions, of yourselves or of other people who you think should be involved in those workshops, we are happy to take names. It won't be huge, but we're happy to put names on a list. As Chris said, there was funding that came along with this call, which was excellent, and we are using it. Also, Chris forgot to mention that we were one of about 27 proposals, out of about 200, that got accepted in this first call. They're going to do another round. But we were really pleased to be selected, and we're hoping that the summer workshop will actually jump-start a lot of the research that we've outlined in this paper as needing to be done.

For more information, there is a link there to the CREOS website, where you can learn all about the Center for Research on Equitable and Open Scholarship, and credit, as Heather said, goes to the whole team. We're also happy to hear what you think we're missing, so that we can go back in the next four days and get in all the things we might be missing before we turn it in on Friday. So if there are particular research questions in your heads about how to align generative AI with our research goals and our library values, we would be happy to hear them. Thank you.

I think these lights are really hideous, but I do see somebody over there at the mic. First I wanted to add one more process thing that I forgot, which is that all 27 of the teams whose proposals were accepted are frantically finishing their 10-page papers as well, and all of the papers will be published openly by the MIT Press, on the PubPub platform, probably in early February. So you'll see a whole slew of papers, each on some aspect of generative AI, with road maps or action plans or impact papers on a range of topics.

Great. Hello. Lisa Janicke Hinchliffe with the University of Illinois at Urbana-Champaign. I want to begin my question by stipulating that I think all of the things that you've said about ways that AI could enhance the research process, scholarly communications, all of these sorts of things, sound excellent to me. I want to stipulate that as a predecessor to the question, because I want to be clear that I think these are great things that AI potentially can do. What I'm trying to understand a little bit better is your theory of impact, or theory of causality. Because I personally don't think that trust in science is being publicly eroded because the public is concerned about the burden on researchers to do peer review; they don't know about that. So tell me a little bit more, if you could, about how all these wonderful things are going to counter the things that are causing the erosion of trust, which don't look to me like the burden of the peer review system or some of the other things. I'm not saying you don't have a theory of change; I just would like to hear it.

That is such a good question. Let me give it a shot.
So, one way that public trust in science is eroded, and it's not the only way, is when there are these high-profile instances of research misconduct. And my theory is that a healthy peer review system would catch some of those things; we all think that a healthy peer review system can help catch research misconduct before it becomes a scandal on your campus. And one of the reasons why peer review doesn't always work the way it should is that there are these known problems in peer review: there are too many papers, not enough reviewers, reviewing is hard, blah, blah, blah. So if we could make peer review more effective and more efficient, then maybe. So it's a long theory, and it's not a clear straight line, it's a little bit of a wiggly line, but there is a theory there. There's another answer, and now I can't remember it. Yeah, tag.

I think there are a lot of elements of the eroding public trust in science that generative AI won't solve. And I think we started from the premise that there are some things that can be made better with the application of this technology, and it may not get at some of those deeper issues. Those might be societal issues; they might be generational issues. But there is a very real problem that researchers and scientists face right now in just managing the requirements, the policies, et cetera, that have been set out to try to preserve the integrity of research. And folks are falling short, through no fault of the individual; it's just that data management is a massive undertaking, as everybody in this room knows, and keeping track of all of these requirements is a huge amount of work. And we do see potential for generative AI to lift some of that burden and participate with humans in the work of carrying out research. But I think you get at a really critical and important issue that we're not really treating deeply in this paper, which is that public trust in science is declining for a host of reasons that generative AI may not effectively address. But our paper will be one among 27, and some of the others might get into the ethics of generative AI and the impact on society in a broader sense. But great question. You'd have to ask the sociologists for the theory of impact. Yeah, exactly.

I just want to add one thing, which is, and I think Lisa sort of hinted at this at the end there, that one of the things we've realized through the course of writing this paper is that we are not the experts in trust. What generates trust? I don't know what the right verb there is. Our research could be of integrity; that doesn't mean it will be trusted. Trust is a thing that lives in another discipline. And so as we think about who we're inviting to these invitational workshops in the summer, I think we need somebody whose field that is. We have a lot of expertise in libraries about disinformation and how that relates to trust, but understanding where trust comes from, how trust is created, and how trust is eroded is a piece that we aren't actually dealing with in this paper. I'm hopeful that when we get to the invitational workshops over the summer, and we talk about how to execute research on these topics, we'll have experts there in those other disciplines that we can explore with.

You're stuck taking the next question. There's somebody there. Go for it. We can't see a thing up here. We'll alternate. Just start talking.

Hi, Dan Cohen, Northeastern University.
Cliff brought up earlier the problem of bad actors, and a kind of adversarial relationship with good actors: people try to do things like this list, but then the bad actors learn from it and alter their behaviors. And so I'm wondering, for the report, is it possible, like, if I were Dean of MIT Libraries but evil, I would look at this list as a, sorry, Chris, I'm not saying, I'm your colleague from across the river. I'll take that to mean you don't think I'm evil, so I think it was a compliment, Dan. It was, always, always. I just wonder if you need to kind of red-team the report, to say, what would a bad actor do if they knew that reports would be auto-summarized, or if they knew that the peer review system changed in the ways you've outlined? I wonder if there's some thought that needs to go into that sort of situation.

Would the evil director like to answer? No, I think that's such a great point, Dan. And I think there's room for that in the way that we express the need for careful research and development. If someone's going to develop an AI tool to do this with peer review, then it has to account for these values; we're speculating on, and sort of challenging, the research community to develop tools that would adhere to these values. But you're right, red-teaming it is a really good idea. And I think we could probably get some colleagues at MIT, and elsewhere, perhaps in the area, to help us with that. I think that's a great idea.

Oh, Steve from Yale. I'm sorry if this is a TLDR question; we're from Yale, so we love our words. One of the things that we did at Yale was to bring up a hospitality chatbot. Its entire purpose is to be able to answer questions like, where can I find chicken nuggets tonight? And it would tell you where the chicken nuggets are. It's a very innocuous and small start. One of the things we noticed was that when we asked where you can find chicken nuggets, it said Branford College. And Branford College did not have chicken nuggets at all. It turned out that in one of the source materials we were pulling from, which was our hospitality website, Branford College and chicken nuggets were somewhat close to each other on more than one page. So it said, well, clearly Branford College has chicken nuggets. The take-home there was that the data is critical: clean data and a clean corpus are paramount. So my question is, this is a data management issue, and libraries are kind of good at that. Is there an opportunity for libraries in the data cleanup? It could take the form of metadata; it could take the form of semantic search capabilities. What opportunities are there for the library to make sure that we're starting with a clean corpus of data when we ask important questions, such as summarize this scholarly journal article? Now I'm hungry for chicken nuggets. So thanks.

Yeah, that was actually one of the inspirations way back at the beginning of this process: thinking about what our strengths as libraries are. What are we good at? What is our groove? And we have centuries of experience in managing very large sets of relatively clean data. That is of critical importance at this stage in gen AI's development, because what's happening right now is that all of these large-scale large language models are training on just the raw internet, because that's what is most easily reachable, and that's why you see the types of results that you see. They reflect human biases, they have hallucinations, they're all over the place, and there's no accuracy, because right now each of these teams is just reaching for the most readily available data.
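The chicken-nugget failure is, at bottom, a grounding problem: the answer came from loose textual co-occurrence rather than from an authoritative record. As a toy sketch of the clean-corpus point, here is one way to force answers to come from, and cite, a vetted corpus. It assumes scikit-learn, the retrieval is deliberately crude, and the corpus, threshold, and menu facts are entirely made up:

```python
# Toy sketch: answer only from a vetted corpus, and always return the
# source record, so a claim like "Branford has chicken nuggets" can be
# traced and checked. Assumes: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A curated corpus: one authoritative record per fact (all invented).
corpus = [
    "Morse College dining hall: chicken nuggets served Tuesday dinner.",
    "Branford College dining hall: pasta bar and salad served nightly.",
    "Silliman College dining hall: burgers and fries on Friday.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def grounded_answer(question: str, min_score: float = 0.2) -> str:
    """Return the best-matching vetted record with its provenance,
    or refuse rather than guess."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    best = int(scores.argmax())
    if scores[best] < min_score:
        return "I don't have a vetted source for that."
    return f"{corpus[best]} (source: record #{best})"

print(grounded_answer("Where can I find chicken nuggets tonight?"))
```

The contract, not the retrieval method, is the point here: every answer is traceable to a specific vetted record, and the system refuses rather than guesses when nothing in the corpus matches.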
And I think it's a call for librarians and a call for libraries, especially academic libraries, which have relatively larger resources, to think about how we can make it easier for our data, our collections, this vetted information, to be used in the development and training of these models. I know that there are a lot of projects being presented at this very CNI on that very topic, which is fantastic, because I think that's going to be the next evolution in the growth of this technology.

Thomas Padilla, the Internet Archive. Hi. Hello. It's a great presentation, but I'm just thinking about organizational capacity: libraries as they are, libraries as they could be. A great set of challenging areas, but I'm curious to what extent you all have been thinking, in practical terms, about how libraries might need to resource themselves to either lead or follow or partner on addressing some of the challenges that have been outlined.

Yeah. I think partnering is one of the key words in that question slash wonderful statement, because the scale at which development is happening is larger than any single library: the scale of data being managed, the scale of compute, the scale of all of it. Our strength lies potentially in partnership and collaboration, and in building larger networks, so that we can address all of these challenges at the same scale that increasingly corporatized information entities already can. Can I pitch that to you, Chris? Yeah. Well, you did a great job, but I will say that some of the feedback we got on our proposal, and there's a whole review committee that reviewed all the proposals, was in fact that one of the strengths of our proposal was the convening power of the libraries on campus, as a place where faculty and students from all across the institute are willing to partner with us. We're not a siloed disciplinary space; we are the interdisciplinary place that can convene all the different kinds of expertise and resources that might be needed to tackle a particular problem. So I think you're absolutely right, it's the partnership, and that was the feedback we got: yes, the libraries should do this, because you are the central convening power on our campus.

Last question; it just clicked over to 00 on the timer. But go ahead, go for it. Boyan Kim, University of Michigan. Thank you for your great presentation. It was such an inspiration to see a model like this taking place. Unfortunately, at many other institutions the effort isn't quite at this level. Some places have more of the office of research leading the efforts, and libraries are kind of not sure how to come into the partnership, what to bring that is unique, the unique strengths of our libraries, and things like that. So I was wondering if you could provide a little bit of insight or recommendations on that. And also, I would be really interested in hearing about how you plan to bring this forward in partnership with the office of research at your institution after this. I didn't catch the last part. How we're going to do what?
How do you plan to move this forward, after this work is done, in collaboration with the office of research at MIT? Thanks, Boyan. That's a great question. So we are working in partnership. MIT, I think, is actually catching up to some of the other institutions represented here when it comes to research computing and computing management: they just founded their first office of research computing and data about one year ago; actually, it had its birthday in October. So we are working on building a partnership; we have a partnership in place that we're growing with that office, specifically focused on data management and data best practices, focused on the fact that data is the lifeblood of AI. MIT has made a call for researchers across the institute, in each discipline, to apply computation and AI methods to their research. That has drastically increased the amount of data being produced, and there was a very strong need for a centralized research computing and data management hub to manage all of it. So that's the partnership that we have in place with the office of research computing and data. And right now, to be very honest, it's focused on building infrastructure and just building up compute power; then we'll be able to turn our attention more to the nuances of data management.

Yeah. I think the other part of the question was how you raise the profile of the libraries enough that other units across campus see you as intellectual partners in this endeavor. Is that a decent enough paraphrase of the question? I will say that it's been a multi-year effort to establish that kind of reputation, but my theory is that a lot of it, at MIT, has to do with our work on open scholarship. The connection there is that by being so deeply invested, and becoming known as experts who understand the research publication system and understand research data, and, again, having expertise across the entire research life cycle, there's a recognition that we are the place that understands that broad view of how research is conducted across many, many disciplines. And so that kind of gives us an in there, I think. And it's all about having amazing staff who have relationships with faculty across the institute.

This is not a question, just a confirmation: you mean summer 2024, since it is December 2023? Yes, thank you. Just wanted to make sure I wasn't crazy. Thank you. Only 12 people have looked at these slides, and nobody noticed that before, so thank you for correcting that. ChatGPT wrote that slide. Thank you all very much. That's awesome.