Thank you all very much for making such a prompt return from lunch. I'm David Kramer from Caltech, and I recently knocked out my front tooth surfing, as you will notice. So there are certain words that I'm still learning to enunciate again, and if there's anything you can't understand, just raise a hand and I'll repeat it. My title at Caltech is Visitor in Aerospace, and here on campus that's about as close as the president can get to calling me the visitor from outer space, me being that much of a JPL man, and there is some truth to that. Next slide, please. Kristen and Steven asked me to come and give a bit of an introduction to stocks versus flows, partly because it derives from research that I'm going to show you, and partly to give you a bit of a flavor of what cutting-edge research in the 21st century is really like, and how unexpected the results of the discovery of knowledge can sometimes be.

I'm going to start with something from about 1997, which was a seven-dimensional, non-invasive early diagnosis of brain diseases from the Biological Imaging Center. It would take an hour just to explain it to you, but suffice it to say that you commonly hear the expression "dimensions of information" as a colloquialism now; this is the experiment from which that comes. The notion was basically that we had reached a point in developmental biology where, even with the world's most powerful MRI machine, a four-Tesla magnet, we could not actually see the lesions forming in a mouse model of human multiple sclerosis, which really surprised the biologists and chemists and physicists and mathematicians who were collaborating on this project. So they came to me, as they often do when they run into a brick wall, and said, we know there's data in there someplace, we just can't figure out how to see it. I realized that the matrix of numbers we were getting out of the MRI was actually a seven-dimensional problem, and that much as if you were doodling in your notebook today, essentially taking my X, Y, Z space, modeling it through time, and creating a 2D picture of that, we could do the same thing with n dimensions of different types of unrelated data. And through Fourier transforms, we could create a two-dimensional picture, and this is that. But the interesting side effect of this discovery was that it wasn't so much about being able to see something with a more powerful MRI machine, or an object with a bigger telescope; we've sort of run the gamut of that. What was actually the case here was that we had to figure out a way to pictorially represent qualitative, invisible relationships. And that allows us to come up with a representation where, and it's a bit bright in this room, on the left-hand side you can see that the cells are quite viable, crawling over each other and quite healthy. In the one on the right, it's quite easy to see that in the same area, that is not the case; in fact, this is where the lesion is going to form. Next slide, please.

Well, I not only do brain surgery at Caltech, I also do rocket science. And unfortunately, much of what we do in rocket science at JPL is also rather unexpected, and sometimes serendipitous, and I work a lot through things like epiphany. I happened to be up at JPL one day working on something totally unrelated to this, when I got a request from the guys in the high bay asking if I would come over and see the lunar platform they were working on.
And I managed to convince them of three things within our two hours of discussion. One was that, yes, the lunar platform is really terrific, but no, Bush 43 is not really going to send men back to the moon, and they needed to adapt this to the Mars rovers and our continuing exploration of the outer moons of Jupiter to have any future for their program. Two, we really needed to get away from the old Apollo 16 car model with the traditional wheels that you see on that kind of hardware. Basically, the notion was that the lunar platform, which was essentially a platform with tentacle-like legs using wheels as feet, was a terrific starting place. But in unmanned space, what we really needed to start thinking in was a more bio-inspired way, as we call it at Caltech, where perhaps the ankles needed to be wrists, so that it could detach a wheel and put on a drill or something else, because it was going to be the actual researcher on the surface of Mars or on the surface of Europa. And then third, that we really needed to get away from wheels entirely in thinking about alien exploration, because what really counts on an alien planet is essentially a hybridization of what I call the Audi Quattro computerized transmission with a tread that is essentially like a common garden trowel, because that's what you're really needing on an uneven surface on Mars. And the rest of what we consider the tread is a hybridization of a World War II tank track. But the point there is that with the rest of the tread surface, you're trying to get rid of all the mass that you possibly can, in order to lighten the object itself, so that you can reduce the propulsion required to lift the thing out of Earth orbit and put it into a Mars trajectory. That's where the real work of the lifting, the heavy lifting, pardon the pun, comes from. Next slide.

So you can see the sort of complexity and dynamic change that happens now in research, where we set out with a hypothesis and it tends to get changed before it even gets proven. And so in the sense of creating a hypothesis, running an experiment, proving the results, and writing a paper about it, in this sort of churn, as I hope we can begin to call it, we don't really have a traditional stock of knowledge that we can send off to the library for preservation. And because of my work in the Biological Imaging Center, where we were pioneering confocal microscopy, two-photon microscopy, and MRI, along with reagents and imaging techniques and the very first reconstruction of non-invasive optical sectioning into four-plus-dimensional data sets, we ended up with this variety of stuff. And it became quickly apparent to me that a lot of this material was sort of a stock, not just a flow of information and a flow of data, but that we were actually creating libraries of libraries, stocks upon stocks, as it were, of information and data sets, all of which could be re-mined with other organisms, or other organs, in fact. And I tried in the mid-90s to get the Caltech Library interested in becoming a partner with us to curate this and begin to create these things, because not only would they have a stock of data from these experiments, but they would also have the foundation for future flows, of going back and re-mining this data with other organisms.
And of course, at a time when everybody was trying to figure out how on earth to deal with the firehose of the World Wide Web, you can imagine there wasn't much interest in that. But basically, the lab was investing about four million a year in imaging techniques and programs over a decade, creating all these different types of data. And as the hardware and software all began to advance very quickly, without anybody to curate and collect the data, all of this was lost. And though all of it would be very useful today in research with new techniques that we have since developed, we no longer have access to any of this material. In fact, the pictures that you're seeing here are probably the only examples of any of this research that still exist. Next slide.

And to make things more complicated, I've recently, well, we have recently, over in the engineering department, created the world's first real-weather wind tunnel, which was dedicated about a month ago. It consists of as outdoor an area as the FAA will allow us to have; it's basically a net with a little tarp over it to keep the sun off. We can create any kind of weather that we like: rain, snow, sandstorms. With this ten-foot by ten-foot fan wall, which contains approximately 1,300 individually programmable CPU fans, we can create anything from a gentle breeze to puffs of wind to unexpected gusts to hurricanes, if we so choose. And the notion is that if we're going to do truly autonomous work in drones and robots, next slide please, we have to have an environment in which we start with the autonomy, unlike, say, Google or Tesla or somebody, who are basically pulling together gadgets or components and hoping that they can get those gadgets to interact with each other and become intelligent in some fashion. We're actually starting with the principles of neuroscience and autonomy, and then figuring out, as much off the shelf as we can, what the hardware and elements need to be in order to create essentially a good capsule, a viable form of metabolism and a physiognomy, for whatever that's going to be. And we're doing a series of what are called moonshots. I won't go into those, but you can see them at the CAST website in some detail. The one that's furthest along is the one on the right, which is an autonomous flying ambulance that we can send in singly, or in swarms of three or 300, to disaster sites, or to somebody who is trapped on a mountaintop, and basically bring them back in a horizontal position. And they can work together without human supervision to determine, particularly in, say, an earthquake situation, where to distribute patients to the area hospitals where triage is not backed up, or where the vital signs being collected inside the vehicle indicate a particular specialist might be, things along those lines. And if we're going to do this kind of thing, we're obviously starting with a six-terabyte repository of information to begin with, and we're going to need the librarians as research team members, to develop the tools that let somebody who's working on the autonomy, an engineer who's working on the aerodynamics, and a mathematician who's calculating forces for us even talk to each other as a team.
And so this time, I've approached the Caltech Library again, saying, look, we've got this really great new machine, and we're not going to be able to start doing serious research on it unless you help us, from the very beginning, figure out how we're going to collect and curate and collaborate on this flow of data that we're going to have, because it's simultaneously an enormous stock of information, but it's also something we have to go back and constantly review and, in this case, evolve. And so with that, I'm going to turn the repository nature and the tools over to Steven. Thank you.

I'm Steven Davison. I'm head of Digital Library Development at Caltech, and David has obviously presented us with a problem to which we have no answer right now. We don't have five terabytes to spare for every researcher that comes along. So what I'm going to talk about is our current repository environment, and how we are starting to reconfigure it to meet the sorts of needs that are going to arise in the future: turning our repository environment from one that is pretty good at storing and delivering stocks into something which is much more dynamic and fluid, which is a flow. As all of you probably do at your institutions, we have a number of repositories for different purposes. We have an institutional repository for publications. We have a digital library repository for digital collections. We use ArchivesSpace for archival management. And we have a research data repository. And as you can see, all of those are built on different software platforms. Uh-oh, first problem: lots of different platforms, lots of different content types, lots of different metadata types, et cetera. This is not sustainable. This is not going to help us build an environment which is fluid and flexible. In addition, we build lots of services around each of these, and typically those services are built natively against each of those repositories. So if you want to build some reports, you've got to build a separate set of reports for each of those repository systems. Again, not sustainable. In addition, increasingly we live in a networked environment where there are lots of different external resources that we also want to include. These are just four examples that we're actively working with right now: ORCID, Crossref, FundRef, and 1science. And I put spreadsheets down at the bottom there because 1science is a vendor that is providing us with data that we want to ingest into our institutional repository, with a view to making it as comprehensive as we possibly can, so we are working with them on that. And typically there's going to be some big mishmash of widgets, programs, whatever, that we use to build all of these services. Again, very unsustainable over time. So there are four strategies that we're engaged in to try and make sense of this. We're trying to come up with a system which is going to be simple, lightweight, platform agnostic, and open. And the four strategies that we are engaging with to make that happen are as follows. The first one: we want to pull all of the data out of our repositories, all the time, so that the data we actually build things against is not sitting in any of those repositories, and we want to store that data in simple open formats and make it openly available as much as possible. Second, already mentioned, continuous harvesting.
Every night, all of our data gets pulled from all of our repositories and stored in our file system. Third, store it in some way which is structured, easily traversed, easily queried, easily indexed. Libraries have typically used XML in the past. If you look at this, it looks like XML with curly brackets instead of angle brackets, right? It's pretty much the same. This is a JSON file. JSON files can be flipped into XML and back. Programmers like to work with JSON because they're used to the curly-bracket sort of notation, and it's really simple: these are all just element and value pairs. So easy to read, easy to parse, easy to work with. And the fourth one: re-embracing the command line. That seems very retro. In fact, that looks like a DOS prompt. It's actually not; it's a bash prompt in an Ubuntu virtual machine running on my Windows laptop. But it's the same sort of thing as the Terminal on a Mac, a bash window in Unix, the command prompt in DOS; they're all the same thing. If we can build tools that will run from the command line, they're really going to be simple. And anyone who went to the Carpentries talk earlier today, or knows anything about the Carpentries movement, knows this is what it's all about: giving researchers simple tools to work very directly from the command prompt. That's one of the principles of the Carpentries movement.

Okay, so we are building a set of tools, which we've named Dataset. Nice generic term; we obviously can't register that as a trademark. But Dataset refers both to a set of tools that we're building that will run at the command line on any operating system, and to the data in JSON format. So we have the Dataset tools, and then we have the datasets themselves, which are all of our data from all of our repositories, sitting in the file system in JSON format. These tools read and write to various places: obviously they read and write to the file system on the machine they're running on; they read and write to Amazon S3; and they read and write to Google Sheets, because that's typically how people are going to give us data, as comma-delimited files or as Google Sheets, and it's also the easiest way to get data back out to people. These tools will also talk not just to our repositories, but to any other API out there. And one of the principles we want is to be able to plug and play the various repositories. So if we decide at some point to replace Islandora with Invenio, then we can just use the Invenio API, populate the dataset, and build our services against it. So now we just have one set of tools that we build on for all these different services. And the tools are so small that they'll actually run effectively on a Raspberry Pi. So the four platforms that we compile these tools for and make available are Windows, macOS, Linux, and the Raspberry Pi, not because everybody uses a Raspberry Pi, but just to really prove that these tools are really small, lightweight, and simple. Okay, so just to review where we are: we have a bunch of trustworthy resources, some of which are local and some of which are in the cloud with various vendors. We have a series of tools for munging (you can look that up in Wikipedia; I like the definition you'll find there) and transforming that data, and building JSON documents for all of our data.
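To make the element-and-value-pairs idea concrete, here is a minimal sketch of what one nightly-harvested record might look like once it is sitting on the file system as JSON. The field names, record key, and directory layout are illustrative assumptions, not the actual Dataset schema.

```python
import json
from pathlib import Path

# One harvested record: plain element/value pairs, nothing repository-specific.
# Field names and values here are hypothetical, not the actual Dataset schema.
record = {
    "key": "authors-20180123-001",
    "title": "Example article title",
    "creators": ["Doe, J.", "Smith, A."],
    "doi": "10.0000/example-doi",          # placeholder DOI
    "collection": "CaltechAUTHORS",
    "harvested": "2018-01-23T02:00:00Z",
}

# Nightly harvest: each record becomes its own small JSON document in a flat
# collection directory, easy to traverse, index, and query with simple tools.
out_dir = Path("dataset/authors")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / (record["key"] + ".json")).write_text(json.dumps(record, indent=2))
```

The same record could just as easily be exported back out as a comma-delimited file or pushed to a Google Sheet; the point is that downstream services read this open, on-disk form rather than talking to the repositories directly.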
These JSON documents are temporary, in the sense that they are recreated every night. They can be destroyed and recreated at will, as often as we need, and they have a very simple structure. And the tools read and write to our local repositories, which are really still the permanent place where we keep all of our data. But instead of building services against the repositories, we're now building services against this open data sitting on a file system.

So I'm going to walk you through two different workflows that we have implemented using these tools. One of them is our work with 1science to populate our institutional repository. 1science gives us that data as comma-delimited files, which we use the tools to ingest. The tools also query Crossref to enhance the data; two things we're doing right now are disambiguating names so that they match what's in our repository, and adding funding information, which Crossref has and we wouldn't otherwise have. And then we write that as a dataset, and that goes into our repository. And then, as noted on the previous slide, the services that we build are built against the dataset. So this is fairly straightforward, something that we would have done in the past; it's just that instead of building a set of tools just for this purpose, we now have a very generic set of tools, the Dataset tools, that will do this for us. The other advantage of doing things this way is that, because these are lightweight tools, we have the ability to change the workflow, so the data doesn't just have to flow in this direction. You'll see the same sorts of activities on the left-hand side there. One of the groups that we're working with on campus is the Total Carbon Column Observing Network, or TCCON, which is a worldwide consortium of sites collecting atmospheric carbon data. There are 22 or 23 active sites; they come and go. Caltech is the administrative center for this research, for this data collection, and the CaltechDATA repository is the permanent home for those published datasets. So the data gets piped down to the local group on campus, which is collecting it. And then we have a set of tools, these are actually specific tools written just to marshal the data, written in Python, and the records get pushed into DataCite and into our research data repository, which is an Invenio repository. Our research data management librarian is a Python guy, and all of the tools that we have written are written in Go, which is a language created at Google in 2009 that's becoming increasingly popular, so he's wrapped these tools in Python wrappers. All of the tools will be available in their native Go format, but also as Python tools. So he actually has a Python version of these to create datasets. And then it lives in our research data repository. The other piece of this is an automated service which we've built to enhance the data using Crossref Event Data. So whenever there is a citation in social media to a TCCON data set, the tools will update the metadata in the research repository and send an email notification that the citation has taken place and that the metadata has been updated. So the workflow here is completely different, but because we have this lightweight set of tools, we are able to build a new workflow without doing a lot of extra programming. And again, we want this to be as plug and play as possible.
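As a rough illustration of the Crossref enrichment step in the 1science workflow above (the one that adds funding information to incoming records), here is a minimal sketch against Crossref's public REST API. This is not the actual Caltech tooling, which is written in Go; the record, the contact email, and the field layout are placeholders.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def funding_for_doi(doi: str, mailto: str = "library@example.edu") -> list[dict]:
    """Return the funder names and award numbers Crossref holds for one DOI."""
    resp = requests.get(CROSSREF_WORKS + doi, params={"mailto": mailto}, timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]
    return [
        {"funder": f.get("name"), "awards": f.get("award", [])}
        for f in work.get("funder", [])
    ]

# Enrich a harvested record in place before it is written back out as a dataset.
# The DOI below is a real published article, used purely as a working example.
record = {"doi": "10.1103/PhysRevLett.116.061102", "title": "Example record"}
record["funding"] = funding_for_doi(record["doi"])
```

Name disambiguation would follow the same pattern: pull the Crossref author list for the DOI and match it against names already in the repository before the record is written back.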
So we have the research data repository in the workflow right now, but this should work just as well for things in the institutional repository. So when there are citations in Crossref to the institutional repository, we could do the same thing. We haven't actually done that yet, but that would be the next step, and it would be very easy for us to do. So we actually have a team of 25 programmers working on this. Well, we actually have a team of two programmers and two librarians working on this. Those are the two developers on the right-hand side and the two librarians on the left-hand side. At Caltech, all of our librarians are named Thomas: Tommy Keswick and Tom Morrell. So the developers are the computer programmers, but Tom and Tommy are very technical. Tom actually has a PhD in chemistry and came in as our data specialist, and he likes to code, so we're glad that he does, because a lot of his data work he can actually do himself, especially as we're building tools that he can then build into his workflow. And Tommy is more of a Drupal guy. But these are the resources that we have. We don't have a lot of people to build complex systems, so we've got to build things which are simple. So that's the end of my description, except to say that all of these tools are available for exploration and download. They're all sort of in progress; they're all at version zero-point-something. I think we're up to Dataset 0.9. I'm not sure when we'll call it 1.0, we're nearly there, but they're pretty much bug-free and they're available for exploration. So I'm going to hand the baton to Kristen, who's going to talk about stocks and flows in a more general sense in the library.

Thanks, Steven. So I'm going to talk about a couple of additional aspects of how we're thinking in more flow-like ways across everything that we do in the library. The two areas that I'm going to touch on are supporting open science and divesting from what I call pseudo-stocks. So obviously open science is flow. Supporting the sharing, reuse, dissemination, and manipulation of research outputs is inherently flow-like. We are reconceptualizing all Caltech Library services around open science; that's kind of our guiding principle. Presume open: we are reframing everything in the context of open science until we can't, right? But it's working pretty well. It's really quite an extensible metaphor. So I'm just going to talk about two aspects of that project here: cultivating flow skills, which relates both to our staff and to our community, and working toward being able to support a more resilient record of knowledge going forward, to support the kind of projects that David has challenged us with. So we've been, like others, deep into the Carpentries. We've actually reframed our entire instruction and outreach program around the Carpentries, which has been great for retraining the liaison librarians and giving them new focus. It's very oriented toward practical, immediately usable skills around things like data management, manipulation, scripting, and visualization. And at Caltech, thanks to Gail Clement, who's really the lead developer of this component, we have AuthorCarpentry, which is kind of in incubator status within the broader Carpentries organization, I guess. So this is to enable authors to participate in open science best practice: open publishing, open dissemination of data, reputation management, author identities, things like that.
And you'll see that many of these lessons that are part of AuthorCarpentry (there's a GitHub site that shows all of the lessons that are available or in development) are inherently flow-like. They often combine more than one tool: moving data around in a much more efficient way, doing things like researcher profiles for funders, and so on. Very practical. We've gotten huge uptake among really all communities, from undergrads all the way up to faculty, with these Carpentry lessons. So I think that's really a way that we're recasting our mindset on the public services and outreach side: to be, first of all, much closer to the researcher; secondly, to support best practices in open science and more deeply understand them and how they apply; and to think in a more flow-like way.

So here's an example of a Jupyter notebook that we have archived in our CaltechDATA repository, and of how these tools are actually starting to be used in practice to create new types of research outputs. This is a Jupyter notebook; most of you are probably familiar with it. It's a web environment where you can combine your data, code, narrative text, and visualizations in an executable platform. You can see they also post the text to the traditional preprint servers. They put up the Python data analysis files, which you could download and rerun with their code within the Jupyter notebook environment, or with your own. The raw data files, as you see, are linked in the CaltechDATA archive, which preserves them but also gives them individual DOIs for reference. And the whole project is also hosted over on GitHub, where you can link to the Jupyter notebook on the web and get the files that way, but that is somewhat more transient. Luckily, the Invenio repository platform has native GitHub integration, and so here's the more permanent representation of that Jupyter notebook in the data repository, and the notebook itself then has a DOI. So I think this is a great model for us to learn from. These are researchers who are on the edge, but this isn't really the bleeding edge; I heard recently that the machine learning community has really taken to Jupyter notebooks, and that's kind of where they're working right now. So I think it's getting traction very quickly, and we need to be aware of it.
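To give a flavor of what such an archived notebook actually contains, here is a minimal, notebook-style sketch: one cell that pulls a raw data file from its archived location and re-runs a piece of the analysis. The URL, file name, and column names are hypothetical placeholders, not a real CaltechDATA record.

```python
# A single notebook cell: the narrative text, this code, and the resulting figure all
# live in the same notebook, which is what gets archived (with its own DOI) next to the data.
import pandas as pd

# Hypothetical direct-download URL for a raw data file that was archived with its own DOI.
RAW_DATA_URL = "https://data.caltech.edu/records/0000/files/measurements.csv"

frame = pd.read_csv(RAW_DATA_URL)   # pandas can read straight from a URL
frame.plot(x="time", y="signal", title="Re-running the published analysis")
```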
So, thinking about where most of us are positioned now in terms of how we view research outputs, it looks a lot like a very disconnected set of flows. The author is writing in some environment that you don't know about, and it's not at all connected to the way the article is published, and then the reader may or may not be able to get to it, because it's not open. The reader who might want the data may or may not be able to get to it, and will probably have to contact the author directly, especially if they want the software, and the public can't get to it either. So this, I think, is the stock-like model of libraries in the past. When it was paper-based, this was a sustainable model and it was stable. I think we are still mentally associating the stability of the old model with sustainability, and it's pretty clear that this is a discontinuous, very unstable, and fragile research communication paradigm. So let's take that example of the Jupyter notebook and bring it into how authors want to work now and, I think, will be working soon.

You've got many more authors, and they're collaboratively working in an environment like JupyterHub. They're also working in Overleaf and ShareLaTeX, which we support through the library; there's been enormously rapid uptake of that on campus, because collaborative authoring in the cloud in those markup languages was very much needed. Then they're of course publishing it as a traditional preprint as well as a Jupyter notebook, where readers can get to it and everybody can see it, because it's eventually published open access. And then the piece that we're still working on is this one: letting readers grab the project, the research, from the Jupyter notebook, put it in Binder, and apply their own data and their own code to that project without changing the original. Binder is a relatively new project, and I think it's very exciting for supporting reproducibility and also just the reuse of data in a complex research publication like a Jupyter notebook. So we have the Invenio integration for preservation of the Jupyter notebook. What we don't have yet, the piece that's not in place, is the Binder-Invenio integration; hopefully that will come soon, because then you could just go to the repository, grab a copy of the notebook, and start to work with your own data. You can do it now, but it's a little bit indirect. So I think what we see here is a much more continuous flow that fits with researcher needs at every stage. It fits with openness, with the principles of practicing open science. And it doesn't have those disconnects that you see in the traditional model. There are potentially many more stages and participants, but it's actually a much more sustainable and viable system. So this is what I think we need to work toward and think about how we support. It obviously still has stock-like components, because we've got a repository and you've got the traditionally published article still there, and I think that gives us some anchoring in what we know well, redundancy of approaches, and so on, as the new evolves. So again, it's a combination of stock and flow elements.

And so to the second topic, of how we're recasting our thinking around flow: I wanted to say a little bit about pseudo-stocks. This may sound like an entirely different talk, and it kind of is, but I think it's actually related to the mentality that I'm suggesting we need to start moving away from. So what is a pseudo-stock? Library-licensed access to content that is widely available, not open, and not related to the institution, and whose stockness, I would argue, is really quite artificial at this point. We view it in a stock-like way, and I think it really is not. I like to think of these as artifacts of a zombie collection-building model that libraries are still deeply mentally embedded in, and until we can get out of this, we are going to find it very hard to support the future of research communications. So obviously the community has been very active in developing organizations, platforms, and open access initiatives, and we recognize that researchers are working in many other spaces as well, which we have to be cognizant of and have relationships with. But what's missing from this picture? What's missing from this picture is where we spend most of our money. I went to the 2.5% presentation yesterday, and one of the questions they had gotten in their feedback was, well, what about the other 97.5%? What about the rest of it?
What we really see at Caltech, and I think this is the case everywhere to a greater or lesser degree, not just at Caltech, is rapidly declining value for where we spend most of our money, which is largely on traditional mainstream STEM journals. At Caltech, about 55% of what we get through subscription is otherwise available open access through other channels, and that's not counting Sci-Hub. We know that through data that 1science has given us about the level of open access overlap with our subscribed content. And obviously we're not paying 55% less than we've paid for these things in the past, nor are any of you. So our money is increasingly not incentivizing open science; it's tied up in this old model that is not serving the future of research and is preventing us from seriously moving in another direction, both in our mentality and in where we're actually spending our money and what initiatives we're able to participate in. And it's obviously not enabling public-good scholarship, which is a foundational purpose of research libraries, whether they're private like Caltech or, especially, public institutions.

Now, obviously the stock approach can sometimes be very successful in persisting culture and knowledge over time. It has a place, and for better or for worse, it can work. And I think that, through the value of taking redundant approaches, we will always have a component of what we do in libraries that looks like the pyramids, right? This is the Ise Shrine in Japan, and somebody who speaks Japanese can correct my pronunciation of that. The Ise Shrine, which is a very holy site, has taken the flow approach to survival over time. It is rebuilt every 20 years, and it has been rebuilt every 20 years for 1,200 years. So it is very resilient. It's constantly reinventing itself, but it still has a consistent and centered purpose. And the other thing that David pointed out yesterday, which was brilliant, and which I would add to this, is that the flow of rebuilding this building every 20 years keeps the knowledge alive through the generations, and it keeps the stock alive. It persists the knowledge of how to build this thing as well as the stock of the thing itself. And I think that gets at some of the examples you were talking about: why you need not just continued access to those images, but also people who really understand the purpose behind them and their data, and what needs to be done with them. So yes, obviously libraries were founded on stock, but just to balance things out, they've also obviously always been about flow. As soon as a reader reads a book, that's flow: the knowledge is going from the book into that reader's head, and then it's going out into the world. As soon as that reader works with a librarian in the library, that's flow; there's person-to-person flow of knowledge. And clearly this community especially has been working very actively with the flow-like nature of the research life cycle. So it's not as if we're not already thinking in flow-like ways. Over the last few years we have built up a deep understanding, trying to figure out where we fit within the research life cycle, which is fundamentally flow-like. So what I'm really arguing, what we're arguing here, is that we're not going to be fully successful in doing this work on the right unless we can get ourselves out of the stock-like mindset of our more traditional activities and everything else that we're doing.
Collection building, public services, the traditional functions: we're not out of that mindset yet, and it's going to hurt us when we try to become flow-like to support the research life cycle. So I've been reading Stewart Brand's The Clock of the Long Now and have been very inspired by this: fast learns, slow remembers; fast gets all the attention, slow has all the power. Right, so the fast, the leaves, are falling off every year. They change very quickly; they're very mutable, susceptible to weather and fire. The slow keeps the whole system alive. So the balance between the fast-changing and the slow-changing keeps the entire system alive: the fast can respond to change while the slow maintains the system's viability. And in Brand's framework, this is roughly the range, within our civilization plus nature, of the fast-to-slow dimensions. Obviously libraries have aspects of many of these, but they're really sitting at the cultural layer. We're sitting very close to the root of the tree; we're sitting very close to the forest itself. And I think we can be inspired by this model in nature to think about how we maintain a good balance for the sustainability and persistence of the scholarly record over the long run. So that's a little bit of inspiration from nature. And with that, I wanted to give you a chance to read that quote again, because it's really great, it's really powerful. And I think with that, we're open for questions.