My name is Tom Cramer, I'm the Assistant University Librarian at Stanford University, and this is a first for me in the sense that it's not a keynote, but if you open to the very first session at CNI on the very first page, I was pleased to see this, and Sharon Adams from CNI says you can now say you've headlined CNI. So a new distinction; thank you to those who put the program together. I'm here to talk about an emerging field. I think many of us have noticed that this research intelligence or research information management thing is beginning to be a theme; it's beginning to be something that we're seeing on our local campuses as both a need and an opportunity. At the same time it's something that is bigger than libraries, but one of the challenges we've faced at Stanford, and that I think is common, is that while it's bigger than libraries, libraries seem to care more, or are more sensitized to the needs and the opportunities. So much of what I'm going to talk about is very early-stage conceptual work. If you came here looking for the answer, I do not have it, and I'm hoping that though we are sitting in a warehouse, or a bit of a hangar, by the end of this session we'll have time for some discussion and some give and take, because I think there are a lot of people here who know at least part of the elephant, and one of the things I have found is that so far I can't find the community or the discussions where this topic is really being tackled.

Stanford recently got a new university president, Marc Tessier-Lavigne, who started in 2016, and as often happens with new presidents, his arrival prompted a re-exploration and a reassertion of some of the core missions and values of the university. One of the things he did, as would be common, is look back at the founding documents and the founding grant of the university. When Jane and Leland Stanford founded Stanford, they were very practical Victorians interested in providing students in California, a very rural place at the time, a practical education by which they could improve their lives and make the world a better place. And there is a key phrase in some of Jane Stanford's writings: a purposeful university is a university that actually has an effect on the wider world. This is a theme that Tessier-Lavigne has been elaborating on during his first year and a half or two in office. The quote here at the bottom: a purposeful university promotes and celebrates excellence as a means to magnify its benefit to society. And this is something we're increasingly seeing at Stanford, a major research university as well as one that does significant teaching.

Now, there's a funny thing. I've been at Stanford for 18, no, 19 years now, and in the libraries for 12. And if you ask a simple question, what is Stanford's research output? There is no answer. It's unknowable. And this is a startling and shocking fact if you think about it, especially through a couple of different lenses. One is just Stanford's statistics from our fact book from 2017: about $6 billion in revenue per year, with $1.6 billion of that attached directly to research, sponsored research in this case. On top of that we have something approaching $100 million per year from licensed technologies through our Office of Technology Licensing, and about 6,000 sponsored projects.
So these are things for which we get extramural funding coming in per year. And like all of our institutions, I think we have great identity management and HR systems. We have pretty good financial management and grant administration systems, because we have to. We know very intimately who was at the university in a formal capacity, how the university operates, what money is coming in, and how we spend it out, because we're heavily regulated and we spend a lot of time on that. But if you try to do a report on what the university produces, not seats in the classroom or the number of degrees minted, it immediately becomes a discussion or an assessment about intangibles. We have no global idea how many research articles are produced. We don't know how many books are written. We don't know how many data sets are minted, or how many data sets are published. We do know about patents, because there's a whole process and an IP regime in place there. But in general, as a research university, we don't have our arms around our biggest product.

I was trying to think about this, because it seems a little bit crazy. To me, it's like having a baseball team, and this image is from University Archives, just part of the libraries, where you know all of the individual statistics, you know the lineup, you know the schedule, but you keep no track of the team's scores by the end of the season. Or maybe, since we're a knowledge organization, it's akin to having a newspaper with a staff, articles, and daily readers that doesn't track circulation and doesn't keep any of its back issues. They just go out, and if people save them locally, that's great; if they don't, well, then it wasn't the university's concern. Not a great way to run any enterprise, arguably, and especially not a knowledge enterprise.

What we're seeing at Stanford and beyond is that knowing the what is increasingly important. We can't just rely on the global scholarly commons and scholarly communications network for diffusion; to run the university enterprise in an effective, efficient, targeted, and purposeful way, it seems like we should know what we're producing, who we're producing it with, and what the impacts are. These things matter for general awareness, for an increasing number of compliance reports, for demonstrating impact within and outside the institution, for trend analysis, and for rapid response. When the travel ban was first implemented, affecting travel to and from the United States from seven countries, we knew immediately which students originated from those seven countries, because they were in the student administration system. We had no idea which faculty, academic staff, or projects the university had engaged with those countries, where faculty might be visiting them to do research, or might be doing research on those countries remotely. In terms of a holistic response and an appreciation of the need, this was largely unknown territory, managed by word of mouth. There are also things like compliance with mandates and policies, and a growing notion of research networking and evidence-based or data-driven collaboration: who should we be collaborating with, and why? Whether that's two departments, centers, or individual research labs within the university, or Stanford and another university or center.
So some of the real-life questions we'd been hearing in the libraries — and one of the challenges is that when these questions come up, people don't necessarily think to come to the library — but things that have made their way to our front door over the last three or four years: What research has Stanford produced? What have you done in the last ten years? What have you done since the year 2000? What is the activity of Stanford researchers after they have left the university? For the Clinical and Translational Science Awards (CTSA), one of the ways Stanford and other CTSA recipients are gauged is by the activity of students who go through the program and then move on to other medical schools or hospitals. That requires tracking their output not only while they're here but afterwards; right now that's a manual process. What research has emerged from cross-disciplinary centers at Stanford? The former president, John Hennessy, invested quite heavily and really shifted the focus of Stanford, probably starting in the late 90s, to interdisciplinary research. This is not unique to Stanford, but we had interdisciplinary centers popping up all over the university — biomedical, law, social science, international politics — bringing together a fusion of experts from many different disciplines. Did that work? If you actually look at the research that was produced, or the impact of those centers, what came out of them versus individual scholars from those domains working individually? How many times has Stanford research been viewed or cited? And who is doing like work? I've mentioned the travel ban; there are lots of cases where people from different disciplines might be visiting a region of the globe and gathering things like climatological data. They could share that data — rainfall and temperature data are universal and can be used broadly, regardless of your exact focus. And then, what's our international footprint?

Those are the grandiose questions. We also heard more tactical or operational ones: why, as a faculty member, do I have to enter my publications data into three separate systems, all manually? Our faculty profile system, a biosketch for grant reporting, and annual activity reporting — three separate exercises right now, and most faculty actually keep a fourth, their own personal copy in a reference manager. Can researchers get automated notifications of funding opportunities based on prior work and prior collaborations? Within the library: what are current and emerging trends and areas of focus? Where should we be investing our collection development dollars? Which journals are most important in terms of where we publish and where our researchers are cited? Which research proposals now have a mandatory data management plan, what does that plan say, and can we proactively do outreach and support for researchers who might benefit from it? And which research projects might have data that could be deposited into the library-managed Stanford Digital Repository?

So there's been a lot of discussion recently around this emerging field of research information management, or research intelligence. When I originally started investigating the field, there didn't seem to be a preponderance of usage one way or the other. It does seem like in the last year or so "research information management" has emerged.
I frankly don't like that term, because it sounds like something that librarians made up and that is important to librarians. So at Stanford we have been using the term research intelligence, which sounds to me more akin to something a budget group or a provost might fund. I'm using the two terms interchangeably, and actually I think it's an important thing to consider: as we think — and I'm speaking with a librarian hat on — about how we connect across campus, what's going to sell, what's going to resonate more. Rebecca Bryant and colleagues have a wonderful report surveying the different flavors of research information management, and it's important to recognize that there are very specialized functions that all draw from the same types of data. It's this union, or intersection, of data about people and departments, data about projects or activities, and data about outputs. In this case the outputs I'm most concerned with — that we're most concerned with at the moment — are research outputs: articles, data, monographs; but they could also be teaching assignments or other types of service or engagement. So whether you're looking at RIM systems or CRISes (which are particularly popular in Europe, largely because of regulatory requirements there to report on research output and impacts), research networking systems (who's working with whom), research profiling systems, faculty activity reporting (what have your faculty or departments done academically in the last year), research analytics, or research evaluation — these all draw from the same data set, or facets of it. It's important to stress that a research information management or research intelligence system is not a profile system in and of itself, though profile systems can definitely benefit from this data. It's also distinct from research data management, which is about the data produced in research experiments and how that is managed and published.

One of the key notions that has emerged, somewhat coincidentally but quite beneficially, at Stanford is the research information ecosystem, and some partisans in Stanford's Dean of Research office have begun advancing the notion of Stanford's research core. This means considering holistically the set of data that's in SeRA, the Stanford electronic Research Administration system; the Stanford Digital Repository, where we save outputs of research; and our profile system, known as CAP, where we have things like researcher bios and which is supposed to facilitate finding experts and networking. The idea is that we already have big troves of important and valuable information in various purpose-built systems across campus — what if we unlocked that information and actually fed it to each other? That is the core notion behind the research information ecosystem: we can actually draw arrows between SeRA, the SDR, our financial management system, our people management system, and other systems. And it's being backed by a slowly growing partnership and alliance, which heretofore has been unprecedented, including the Dean of Research, the libraries, the Office of Research Administration (which is actually part of our Business Affairs unit), and University IT, which runs a lot of the systems.
And the idea is to reduce administrative burden and enhance faculty research and scholarship at Stanford. The fundamental principles are open internal data exchange — reducing the friction of the flow of data between systems and eliminating the need for re-entry, which often happens — and increasing creative and analytic opportunities by repurposing data: you might hold data for your own purposes, but someone else in the enterprise has a creative need or a creative visualization you never would have thought of. The underlying principle is leveraging the data that is already in the university.

Now, I really like silos — I have a whole other shtick with a big silo slide in it — and I recently heard, for the first time, the term "cylinders of excellence." I think this is a much better framing for how a lot of universities, Stanford included, think about their individual departmental functions. Our libraries, research administration, finance, and personnel have to make payroll, file grant reports, get proposals in, buy collections. These are things we've been doing for decades with very highly tuned systems, processes, and procedures. And when someone from the department next door comes over and says, hey, we would like to help, or hey, can you give us access, that immediately raises warning flags: we've already got a lot of work, why would we want to take on yours as well? So the fundamental shift I think we're beginning to see at Stanford is a move away from these cylinders of excellence toward a network diagram, where we all have a piece of the puzzle and some of the information within the institutional context — what happens if we try to splice that together?

So here are a couple of examples. Some of these are actually hardwired integrations today; some are more speculative or anticipatory of what might happen with a research information ecosystem. This is the profile page of Ann Arvin, who happens to be a pediatric researcher as well as our vice provost and dean of research. Right now, all of the publications data fed into Ann Arvin's profile page, and into every other profile in our system, comes from the libraries via a harvesting and publication engine we have built and exposed through an API. The only consumer at the moment is the profile system, but it's important to note that one of the most successful features of our profile system is that it has an API of its own, through which individual researchers can embed their profile in their departmental web page — and we actually get more hits and more views through the API and the embeds inside our massive farm of Drupal sites across the university than we do from direct hits to the profile system. One of the things we don't have in place now is direct links: we would like richer profiles, not just citations but links to the full text or the data produced. An anticipatory next step would be to provide direct links into the digital repository.
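To make that embed pattern concrete, here is a minimal sketch. The endpoint and JSON shape are hypothetical stand-ins; the talk doesn't specify the actual CAP API.

```python
import requests

# Hypothetical endpoint and JSON shape; the actual CAP profiles API is not
# described in the talk.
PROFILE_API = "https://profiles.example.stanford.edu/api/v1/profiles/{profile_id}"

def embed_profile_html(profile_id):
    """Fetch one researcher's profile and render a minimal HTML snippet
    that a departmental Drupal page could drop in."""
    resp = requests.get(PROFILE_API.format(profile_id=profile_id), timeout=10)
    resp.raise_for_status()
    # Assumed shape: {"name": ..., "title": ..., "publications": [{"citation": ...}]}
    profile = resp.json()
    items = "".join(f"<li>{p['citation']}</li>" for p in profile.get("publications", []))
    return (f"<div class='profile'><h3>{profile['name']}</h3>"
            f"<p>{profile['title']}</p><ul>{items}</ul></div>")
```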
A second example is between the Stanford research administration system and our digital repository. Through the grants administration process we know when grants are due, when close-out activities happen, what milestones may have been promised, and which grants actually have data management plans. So as part of the research administration dashboard, it's not too hard to envision that among the action items, or ticklers, on a faculty or researcher dashboard, you get a tickler toward the end of the grant: you are coming to the close of this; do you have any close-out activities? Did you publish any data or any articles related to this? If so, here's a hot link that takes you directly into the digital repository — you'll already be signed in, and in fact we may have pre-populated the deposit form with a link to the sponsored activity that funded the work. (There's a small sketch of what that flow might look like just after these examples.)

Number three: I think most of our institutions have an Office of International Affairs, or something attached to it for high-profile visitors, trying to keep a lens on where we're doing our work. Right now this is a manually assembled data set scraped from various sources. If we could extract place information from the sponsored research data and feed it into the Go Global research portal, which just shows where Stanford research activities are happening, we would create an automated and somewhat systematic feed — still not comprehensive, but a marked improvement over the way this data is currently entered and managed.

Another example involves STARS, our internal training and certification system. If you're handling volatile chemicals or toxic substances in a lab, you have to get certified; that's managed through the STARS system, a module on PeopleSoft. Working between research administration, the STARS system, and Environmental Health and Safety, we might be able to track which grant proposals and projects actually work with these types of chemicals, feed that into STARS to auto-enroll people, and then maybe not even grant door access to the labs until staff or researchers have passed the certification test in STARS. Again, all of this already happens, but through out-of-band processes.

A fifth example emerged out of the blue. Like many campuses, we're going through a building boom — Stanford is now four stories tall; it's no longer the bucolic farm it was even 20 or 30 years ago. One of the new buildings going up right now houses the Stanford Neurosciences Institute and ChEM-H, the interdisciplinary center for Chemistry, Engineering and Medicine for Human Health. The administrator for the Neurosciences Institute heard about this effort and said: that's really interesting; I'm doing space and lab allocation right now; can you tell me who's worked with whom in the past? Because I would actually like some data about how we might allocate the space in the most scientific or most productive way. So we're even seeing examples where the ability to understand what has already been produced might inform decisions that would never have been on our radar, at least within the libraries, originally.
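Here is that promised sketch of the close-out tickler from the second example: a minimal version, assuming hypothetical SeRA-style grant fields and an SDR deposit URL scheme (neither interface is specified in the talk).

```python
from datetime import date, timedelta
from typing import Optional
from urllib.parse import urlencode

# Hypothetical deposit URL; the real SDR deposit interface is not described here.
SDR_DEPOSIT_URL = "https://sdr.example.stanford.edu/deposit"

def closeout_deposit_link(grant: dict, today: date, window_days: int = 60) -> Optional[str]:
    """If a grant is within `window_days` of its end date, return a deposit
    link pre-populated from the sponsored-activity record; otherwise None."""
    if grant["end_date"] - today > timedelta(days=window_days):
        return None  # not close to close-out yet; no tickler
    params = urlencode({
        "grant_id": grant["id"],
        "title": grant["title"],
        "has_dmp": grant.get("has_data_management_plan", False),
    })
    return f"{SDR_DEPOSIT_URL}?{params}"

# A grant ending within the window yields a pre-populated link for the dashboard.
example = {"id": "SPO-12345", "title": "Example project",
           "end_date": date(2018, 7, 1), "has_data_management_plan": True}
print(closeout_deposit_link(example, today=date(2018, 6, 1)))
```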
So Stanford is a research enterprise, and this goes back to the who and the how, but not the what. We have a research administration system which manages grants. We have good systems about people, in terms of profiles and PeopleSoft. We have a digital repository, and there are also external repositories for hosting digital artifacts and assets. But where and how are we actually tracking our research output globally? Where is the system that holds this research intelligence? It's nowhere in the constellation of our systems, and I think it's largely missing in our sphere in general.

So our answer is Rialto, a research intelligence system. Rialto does not stand for anything; it's a totally made-up word. It had an R and an I in it, and we were desperate to come up with a name before we had to take it to a committee — that is the nature of Rialto. I hope it also can't be copyrighted. It's the neighborhood that is the upper bank in Venice: the high bank.

We have conceived of Rialto not as a standalone system but as a set of complementary components. First, it's a database, actually capturing and relating information about people, projects, and outputs. Second, we know there are needs for dashboards and canned queries, as well as ad hoc queries about who's working with whom; we have begun to develop stories and vignettes about what some of those use cases might be, and haven't yet implemented them, but are looking forward to doing that. Third, we're looking at Rialto not as its own cylinder of excellence or stovepipe but as one of the nodes in that network of campus information on research. We already know, from harvesting publication data and feeding it to profiles, that there's interest across campus in the data the library is collecting, managing, and farming out, so we're going to continue doing that through both integrations and a codified set of APIs. I just got an email this morning from the CIO at the medical school, who is interested in producing dashboards about faculty collaborations and faculty output; we're going to work with him to see what data we can feed him, or what API we can expose, so they can write their own visualizations. And finally — because Dean Krafft is in the audience, and because I also believe in it — it has a linked data face. A big part of our internal ecosystem is built on understanding and managing identifiers, but there are also people outside the university who are interested in Stanford entities, whether that's people, projects, or departments. We intend to publish all of this as linked data on the open web — well, the public aspects; we will also have plenty of internal information that we keep behind our firewall.

In a more graphical illustration, what we're really looking for is to relate, report, and reuse data about research outputs on campus — both by capturing data that is on campus and by capturing data from external sources and feeds. I've already mentioned the digital repository, SeRA, and the profile system, but one of the key things we need to develop — and where we need to replace our internal code, which is unsatisfactory and insufficient — is how we actually consume data from funding agencies, external repositories, publishers, external identifier services like ISNI or ORCID, and the general web of data.
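As one concrete case of consuming an external identifier service: ORCID exposes a public read API. A minimal sketch follows; the endpoint and response shape reflect the public v3.0 API as best I know it, so verify against current ORCID documentation before relying on it.

```python
import requests

def orcid_works(orcid_id):
    """Pull a researcher's works from ORCID's public read API (v3.0)."""
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    works = []
    # Works arrive grouped by shared external identifier; each group holds summaries.
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = summary.get("title") or {}
            works.append({
                "title": (title.get("title") or {}).get("value"),
                "type": summary.get("type"),
                "put_code": summary.get("put-code"),
            })
    return works

# Example (any public ORCID iD should work):
# print(orcid_works("0000-0002-1825-0097")[:3])
```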
Very early on, to support the profile system, we ended up building our own publication harvesting system, and about two-thirds of the way through that implementation we became aware of Symplectic Elements, which by all accounts — we're not using it yet — does an excellent job of harvesting data from multiple sources, many of them open, many commercial, and then doing things like deduplication and disambiguation, and letting you enrich it locally. (There's a toy sketch of that deduplication step a bit further below.) And as we've looked increasingly into this space, what we have found — not surprisingly, and perhaps a little alarmingly — is that some of our entrepreneurial friends in the commercial sector have understood, maybe longer than many of us, certainly longer than I have, that there's an emerging market here, and that there's real power in thinking about this as complementary verticals of information. Silos, in another word.

If you look at Thomson Reuters — now Clarivate — or at Elsevier, you have great article discovery and delivery platforms and reference management. If you look over at Digital Science, which is also developing a really wonderful set of advanced features, you get tools for managing your references and your sources, and you can graft onto those things like social networking: who's doing like work, and can we link up. It's interesting to see multiple profile systems coming up in these commercial stacks. Really, what we're looking at right now is automated publication and citation harvesting to feed our research intelligence systems; but once you've fed that information in, there's a growing suite of applications that are really useful for evaluation and analysis. What's your research output? Which departments or individual faculty members are the most productive? If you're doing compliance management, looking at CRIS systems, there are lots of examples emerging. If you're looking at funding opportunities, can you harvest those and send notifications? And finally, because the data and articles need a place to go, there's a growing set of repositories to deposit into.

Right now these are merely separate verticals. The rows I put in were somewhat arbitrary — they made sense to me as we did the analysis, based on how the different commercial players were positioning their products. And there's a lot of value in these commercial systems, a lot of innovation and a lot of polish. What would be great is if we actually had lines that went across, with defined APIs and a defined flow of data, so that once data or services came in, they weren't trapped there; if you decided you wanted to run different operations on that same data, or expose different services, that would be possible, because the marketplace and the APIs would be well enough defined that people understood them, and there would be enough competition to enable it.
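Here is that toy sketch of the deduplication step: merging harvested records on a DOI when one exists, or on a normalized title plus year otherwise. This is a deliberately crude illustration of the idea, not how Symplectic Elements or any other product actually does it.

```python
import re
import unicodedata

def norm_title(title):
    """Crude normalization: strip accents, punctuation, and case."""
    ascii_t = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", " ", ascii_t.lower()).strip()

def dedupe(records):
    """Merge records sharing a DOI, or failing that, a normalized title + year.

    Each record is assumed to look like:
      {"doi": "10....", "title": ..., "year": ..., "source": "scopus"}
    """
    merged = {}
    for rec in records:
        if rec.get("doi"):
            key = ("doi", rec["doi"].lower())
        else:
            key = ("title-year", norm_title(rec["title"]), rec.get("year"))
        if key in merged:
            merged[key]["sources"].add(rec["source"])  # keep provenance of every feed
        else:
            merged[key] = {**rec, "sources": {rec["source"]}}
    return list(merged.values())
```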
On a lark, I interposed an open stack: if we look at the open source or open data solutions out there right now, what are their analogs, and where might they fall in this picture? It's a somewhat sparse list, and I think that's both a need and an opportunity for many of us assembled here. That's not to say that a completely open stack would replace, compete with, or be mutually exclusive of what some of these commercial providers are offering, but it's important to have it at least as an alternative, so that we understand the processes ourselves, and our own capacity. Many of you will have seen this next slide: Herbert Van de Sompel showed it in his keynote in December. It's from two researchers at the University of Toronto — I'm forgetting which campus — Posada and Chen, on Elsevier as rent-seeker. They've done a different plotting, and a really good one, of the end-to-end scholarly workflow and Elsevier's growing acquisitions and ability to really elegantly integrate a lot of these services and data, to provide richer services both upstream and downstream. A different illustration of the same point.

Rolling back to Stanford: we went through an extended process of negotiation, putting out feelers to other departments and units on campus, saying, hey, there's this thing called research intelligence or research information management; we're interested in it; we already have a core data set; could we help you? What do you need in this space? And we sort of announced that we had this system, Rialto, which was a system in concept but not yet in implementation. And a very interesting thing happened: within about two weeks of announcing it, five or ten people contacted us out of the blue and said, we heard you have this great data set and can run reports to answer these questions; I've always wanted to know this; can you do that for me? The answer has been "maybe," or "maybe in time"; in a couple of cases we've been able to help out, because we actually have the publications data. But what we've been able to do through this process is begin to systematically cultivate and document the use cases, and we've documented them in agile story form. The slide is small, but for example: as a university administrator, I need to understand the impact of cross-disciplinary institutes so I can assess their ROI. As a university administrator, I want to measure the impact of research on student publications so I can understand which faculty members and granting agencies produce the most student publications. And the list continues. While we could have sat in the library without going out to talk to people, and might have speculated and gotten some of these right, it's a really interesting validation — proof that there actually is demand for understanding the holistic data set about research impact and research outputs at Stanford. Right now we have about 20 of these, and we're incrementally going out to talk to more people and capture more stories. The bottom one here: as an administrator, I need to understand how the use of building resources and room assignments affects productivity and output. Again, something we wouldn't have guessed if we had only talked to ourselves. So in other words, we now have demonstrated evidence that there are real needs for research intelligence: to reduce administrative burden; to feed profile, CV, and biosketch systems; to simplify faculty activity reporting — which we've heard from a couple of people, though not yet from the individual department administrators who do or consolidate the activity reports, so we're going slowly on that front;
to report on and analyze activities; to capture evidence of impact; and to evaluate existing collaborations and find new ones. One of the queries we got: over the last 17 years in biomedical research, have Stanford faculty collaborated more with UCSF or with Berkeley, and by what measure? It turns out it was UCSF, but that was the kind of question that was otherwise unanswerable. Also: analyzing funding in terms of outputs, collaborators, and trends; and, from the library's own operational perspective, informing selection and collection development, and supporting consulting and outreach opportunities — who already has research data out there, who has extant grants with research data management requirements.

Taking a step back: what does this mean from a library perspective? We've always concerned ourselves with acquiring the scholarship that is the basis of more scholarship, but we haven't necessarily captured, or made it our business to understand, the research outputs coming from the institution itself. For example, our digital repository is there and available for deposit, but there is no mandate, and uptake is far from universal. This is beginning to shift the thinking, at least at Stanford, toward the idea that we need to understand and capture this more systematically — not simply for the sake of filling up the repository, but to actually understand what the university is producing, as the knowledge organization that cares about the long-term horizon of research outputs and future scholarship. It also means that where we have traditionally focused, as a library, on the upstream process — informing researchers by giving them information to feed their own activities — I think we now need to shift attention to what's coming downstream: after researchers leave the library, do their research, and publish (which maybe we'll subscribe to, or which gets deposited), we should be tracking that activity in a systematic way. It also means we need to do more than just look at information; we need to look at analytics and intelligence — not just bibliometrics, but understanding impact through evidence-driven, data-driven processes, a step beyond traditional bibliometrics, which again is not necessarily a focus we've had at Stanford. And a fourth part: I think this changes the nature and role of the libraries at the university. We're widely recognized as a kind of academic commons, a place you can go for information resources, but not necessarily thought of in the same strategic way as some other departments. If we're in a position to interpret this type of data, offer a lens or a view on Stanford's output, and help with forecasting and strategic decisions, that's a different place than the library often is, or has traditionally been — being part of what's considered the research core.

So we had an interesting case study, and I think it shows some of the power of moving from these cylinders of excellence to understanding that some of this data might already exist. We had a colleague at the university who wrote — this is a quote — "I was at a meeting of the Academy of Sciences recently about barriers to international research."
"Many faculty members were wistfully bemoaning the fact that it would be amazing if we could map, or at least count, international co-authors for Stanford authors, to see how important international research is at the institution. Would it be possible to design a Rialto query that could surface co-authors from non-U.S. institutions?" We actually had the bibliographic data from feeding the profile system. It wasn't optimized for this type of query, but with some cleansing and three data runs to work out the process, a member of the library's team produced this visualization, which we sent back to our colleague, and he said "holy" — and he actually swore, which is why I'm not sharing the rest — "Peter, I've been waiting six years for what you did in less than a week. This is the richest data set about our global research footprint that we have ever produced, bar none." And it's that kind of insight and that kind of utility that is really driving a lot of our interest within the libraries to see what other knowledge and what other insights we can surface. (A rough sketch of the kind of affiliation analysis behind that map appears a little further below.)

In terms of Rialto, our partially implemented system — we have some of the data, and we have some plans — we've been doing data development, schema development, and architectural planning for about the last six months. Starting a week ago, two developers began to work on it, and beginning in May we're standing up a larger team for what we anticipate will be a couple-of-months-long sprint, producing what we hope will be a core data set with regular feeds from PeopleSoft, our research administration system, and our profile system. The initial set of reports we're tailoring to are all stories from our Office of International Affairs, because they have some of the greatest need and have been very early backers — and also, if you put stuff on a map, it's a great visualization, so it can be impressive. That's where we are right now; once we do that, we're hoping to pause, rest, assess, and see what comes next.

And I was hoping at this point — coming out of a series of different CNI conversations and monitoring things — that this would be a good forum for group discussion, because I think Stanford is not alone in not having a complete system, or even a complete picture of what a complete system might look like, and there's some collective wisdom here. So I would be interested in knowing: What have we missed so far? What have we not touched on? Where are there great troves of data, or of systems, that we might be able to slot into place? If you already have research intelligence or research information management systems at your institution: where are you getting your data, how are you getting it, and who is using it? What are your use cases? What are your success stories, or your unmet requirements? And at a meta level: where is the discussion on this happening? As we continue to try to figure out what our systems, our progress, and our approach should be — or even what the key data and the success stories are — and as we try to navigate the intra-institutional dynamics of getting people to collaborate on a front where they may not have collaborated before: what works? So maybe we can move into that, given the dynamic in the room; I've got two more slides, and we can circle back to this if people are interested.
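Here is that rough sketch of the affiliation analysis behind the co-author map: counting, per foreign country, the papers that have both a home-country author and a co-author abroad. It assumes records already carry per-author country codes; in practice, extracting and cleansing affiliations from the bibliographic data is most of the work.

```python
from collections import Counter

def international_coauthor_counts(records, home="US"):
    """Count papers with an international co-author, per foreign country.

    Each record is assumed to look like:
      {"authors": [{"name": ..., "institution": ..., "country": "US"}, ...]}
    """
    counts = Counter()
    for rec in records:
        countries = {a["country"] for a in rec["authors"] if a.get("country")}
        if home in countries:          # only papers with a home-institution author
            for c in countries - {home}:
                counts[c] += 1
    return counts

# Two toy records: both have a German co-author; one also has a Japanese one.
recs = [{"authors": [{"country": "US"}, {"country": "DE"}, {"country": "JP"}]},
        {"authors": [{"country": "US"}, {"country": "DE"}]}]
print(international_coauthor_counts(recs))   # Counter({'DE': 2, 'JP': 1})
```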
Stepping back and reflecting, I think from an institutional standpoint there are six critical needs in this very immature field. I certainly feel them from a Stanford perspective, and I wonder if they resonate. First of all, we need to understand what the sources, the needs, and the value of this emerging field of research intelligence actually are: how do the different facets — faculty activity reporting, feeding profile systems, bibliometrics — all feed together? And we need to understand that this is a holistic data set, with views on the data that could serve many different institutional needs. We also need to understand what's already in the university, and what we're not capturing within the university — through legacy or the weight of tradition — because we've relied on external sources, or because it just hasn't been captured or managed so far; so we need to understand our map and data flows from an internal perspective as well as an external one. It's also clear to me, and I think to anyone who's tried to implement a profile system at scale, that you cannot rely on manual data entry alone: if we don't come up with automated ways to capture, enhance, and ultimately share this information, we're fighting a losing battle. And as we can infer from that earlier picture with the verticals, there's a lot of automated data capture happening at Elsevier and Clarivate, and increasingly at Digital Science and others entering the field, so understanding how we can capture this data automatically is essential. We need to understand how to form new partnerships that are largely unprecedented in many of our research institutions, where there is highly customized and proprietary ownership of individual institutional processes and workflows. If we are going to keep data, services, analytics, and knowledge from being locked up in commercial silos, we need to understand how we can both technically and contractually ensure that data and services are able to flow across different providers, whether open or commercial. And finally, I think we need to figure out how we as a community are going to have this conversation, and where we're going to go next.

One of the things that has really given me inspiration, and has informed a lot of our designs and thinking at Stanford, is VIVO. I said this yesterday and I'll repeat it today: I think VIVO has traditionally been thought of as a profile system. But if you talk to Dean Krafft, or Mike Conlon, or another longtime VIVO convert, they'll talk to you with a gleam in their eyes about the scholarly graph: this is what our scholars are producing, how they relate to each other, what the departments are, what the projects are. It's a great RDF vision of how all of this is actually interrelated. And yes, VIVO currently is a system that is, I would argue, largely tailored around the profile use case, but the data modeling, the community, and the back end are worthy, and capable, of so much more than that — and there's actually a lot of activity in that sphere. Stanford has recently bought all in to VIVO, and we will be using VIVO as the core data store, because of its flexible schema and the VIVO-ISF ontology's ability to track and relate these different items. That said, we won't be surfacing VIVO as the front end, because we already have a profile system; so some of our work is coming up with the applications and the APIs to access and surface this data set.
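To give a flavor of that scholarly graph, here is a minimal rdflib sketch relating a person, a grant, and an article. The namespace and predicates are purely illustrative placeholders; a real Rialto graph would use the VIVO-ISF ontology's actual classes and properties.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Illustrative namespace and predicates only; not the actual VIVO-ISF terms.
EX = Namespace("https://rialto.example.stanford.edu/")

g = Graph()
person = URIRef(EX["person/jdoe"])
grant = URIRef(EX["grant/SPO-12345"])
article = URIRef(EX["article/doi-10.0000-example"])

g.add((person, RDF.type, EX.Researcher))
g.add((grant, RDF.type, EX.Grant))
g.add((article, RDF.type, EX.Article))
g.add((person, EX.principalInvestigatorOf, grant))   # person <-> project
g.add((article, EX.hasAuthor, person))               # output <-> person
g.add((article, EX.fundedBy, grant))                 # output <-> project
g.add((article, EX.title, Literal("An example article")))

# Serialize the graph as Turtle, the typical linked-data publishing format.
print(g.serialize(format="turtle"))
```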
In terms of the community conversation, as an observation: there is already a lot of good work in the VIVO community, and for those of you who are in it, I think maybe we should continue this conversation there. For those of you who are not in VIVO — whether you're running the system, have an interest in it, or not — I think we might also want to continue this discussion in that frame. And as a plug, the ninth annual VIVO conference is June 6th through 8th at Duke University; for those of you who are here and interested, maybe we can organize something. So, what time is it? I think that's perfect: I've once again succeeded in taking all the time, and in making sure that, while I welcome any question, we're out of time.