I'm Cliff Lynch, the director of CNI, and let me welcome you to our fall 2011 member meeting. I'm delighted that so many of you are here. We've been blessed with pretty good weather. I'm not aware of any major weather hang-ups going on around the country right now. I am, of course, concerned that we are building up a karmic debt and, you know, thinking about record snow in April or something for our spring meeting in Baltimore, but anyway, we'll take it while we can get it. I have a number of logistical things I just want to touch on before getting to my main remarks, and for reasons I'll get into in a minute, we have a little less time than usual, so I want to move things along. So I would like to welcome you all here. I think that you're going to find this to be quite a memorable day and a half. I'd like to especially welcome a number of international members and guests that we have joining us. It is not always easy to do those international trips, and I am grateful that they could make the effort to be here with us. We have a number of new members or rejoining members that I would like to take a minute to recognize. These include Lafayette College, the University of Wyoming, California Polytechnic, Wichita State University, Southern Illinois University at Edwardsville, the University of San Diego's Copley Library, and Wake Forest University, and I welcome those institutions. I had an opportunity to meet representatives from at least some of them at the new members session earlier today, and it's great that they are here with us. There are a couple of things that have been changing that I just want to make a quick note of. Over the course of the summer, we did a major rebuild of the CNI website. Many of the visual characteristics are maintained from the old one, but I think you will find that the new one is a great deal cleaner, more convenient to use, and much easier to navigate.
As part of that, we also cleaned up a number of other things. You will note that this fall we finally put our fax machines out to pasture and went to an email-based registration system. There were actually some reasons why we kept the fax machines as long as we did, not that we liked fax machines, but anyway, that's all over. Similarly, I will not make mention of evaluation forms in your packet, because there aren't any. You will get one of those in email in due course after the meeting. There have been a number of these kinds of small things that we've finally had an opportunity to get done. Well, actually some of them, like the website, were not so small, but these are important maintenance things that we have gotten done. Despite the favorable weather, I want to just make mention of the message board by registration. If there are changes in the breakouts for some reason, we will post any changes there so that you know. There is also information on wireless connectivity, if you need it, at the registration desk; there's a one-page handout available. The last thing I just want to mention in terms of housekeeping is that we've changed the format of the meeting a little bit. This fall, we got probably 50% more proposals for sessions than we've ever gotten before, plus there were a number that we wanted to invite because they dealt with things that we wanted to get on people's radar screens because we thought they were important. There was just no way we could fit them in the number of breakout session slots available. We debated what to do about this for a while. We considered having several series of breakouts starting at like 6 tomorrow morning. That was probably not optimal. What we ended up doing is tightening up some of the breaks this afternoon, notably the very long break that usually follows this session, and slotting a third round of parallel sessions in. We'll still start the reception at about 6 o'clock.
The end of the day stays the same, but what you'll have is three rounds of breakout sessions with somewhat shorter breaks between them, rather than two rounds of breakout sessions and really long breaks. One of the upshots of that is that I really do need to be finished here at 2:15 promptly. I do want to allow time for questions, so I'm going to try and limit all of my remarks to about 40 minutes here, so this should be good. What I want to do in these remaining 38 minutes or whatever I have is really to give you a bit of the year in review, to look at some of the major trends that we're tracking, and to talk a little bit about how those connect up with our program and our program plan. You'll find our 2011-12 program plan in the white packet that you received at registration. It is also, as of this morning, up on our website, and I'll just note that if you would like additional copies of the glossy printed version to share with colleagues at your institution, just let me or Joan know and we can get some for you. I know that some of our members have found that to be a useful internal communications tool. That's what I want to do in the next little bit. I sort of struggled with the place to start. There are these fascinating macro trends going on right now: this whole debate about how much higher education should tilt towards the vocational, issues around the legitimacy of various areas of study, particularly when they are subsidized by public investment, the whole question of how you demonstrate and quantify economic impact, job creation. All of this is swirling around at a macro level and framing a lot of things that we have on our plate. I don't really want to spend a lot of time on that, other than to note that I think there really are some very serious conversations going on that will shape the broad future of the higher education system, especially in this country.
But I would also note that if you look at the institutions represented here, they are a relatively limited part of that big higher education system. I think that while they will not be unaffected, they will have more flexibility, I think, to choose how to respond to this than some other parts of the system. The place where I have concluded that I want to start, in terms of things that are really directly on CNI's agenda, is to note that this has been the year when big data became fashionable. How many times have you heard about big data in the last six months? It's on the cover of things like The Economist; you have The New York Times running pieces on big data. This is really an idea that I think has started to capture the public mind in very interesting ways. It's certainly an issue that we here have been out in front of for a number of years, but it's one that's taken on greatly enlarged dimensions and that you can actually talk with people about now outside of our profession. I think that there are two lessons in the year of big data that we want to be mindful of. One is that not all the data we care about is big data. It's wonderful to have these sorts of conversations about how my exabytes are bigger than your exabytes, but I think that when we really go out and look at what's happening in the world of data-intensive scholarship, data-intensive scholarship is not the same as big-data-intensive scholarship. Big-data-intensive scholarship is a rather narrow slice. So much of the information that needs to be organized and cared for really fits pretty comfortably on smallish disks and lives in tools like Excel spreadsheets. And I think we need to remember that that data is really, in many ways, the most intellectually challenging, because it is so diffused throughout the system of scholarship, and because in many cases the investment per data set is substantially smaller.
I mean, if you're signing checks for the Large Hadron Collider, you know, you're going to ask where the data is going. Now we've gotten to the point where if we're giving out grants, we ask where the data is going, in terms of the NSF data management mandate, for instance. But really we're going to be moving on into scales even smaller than that, I think. So that's one thing I think we need to keep in mind. The other thing I think we need to keep in mind is that coded into this big data rhetoric is a lot more than data-intensive scholarship. There's a whole agenda of machine learning and prediction and classification that is implicit in a lot of this discussion of big data. In other words, you can make scholarly discoveries certainly through machine learning and classification and finding correlations and relationships, but you can do lots of other things. You can run societies, you can predict, you can sell stuff, you can sort people in different ways. And I think that one of the things we're seeing, and one of the reasons there's so much interest in this suddenly beyond the scholarly world, is that people are recognizing that this is a very, very powerful set of technologies and tools for a whole variety of commercial and governmental agendas as well as simply scholarly ones. We've even seen that in the emphasis in our own institutions now on analytics, classroom click streams, predicting students at risk. All of a sudden, we're moving from scholarly use of very large data to very operational kinds of things, and these come with their own very real issues, I think, about privacy and about what uses are and are not appropriate. We had the National Science Foundation mandate go into effect in January. So the campuses represented here, at least from the U.S., have now had just about a year of experience dealing with this. We're still trying to understand a lot of things. We're trying to understand exactly how review panels treat these.
We're trying to understand what guidance to give to faculty who are preparing them. One of the things that has been notable in the sort of collective response to these mandates is efforts not just to develop sort of general best practice, but actually to develop tools that can be customized for different funding agencies and for different campus environments that actually will assist faculty in developing these plans, and I think that's a very high leverage kind of activity. I want to note in this connection also that there are two calls which I have shared out to the CNI Announce List from the White House Office of Science and Technology Policy. One call deals with public access to journal articles describing research that has been publicly funded, the NIH mandate and where that should be headed and whether it should apply to other fields. The second, which is the one I want to really bring to everybody's attention here, is a call for public input on data sets that are produced as part of publicly funded research and what policies should accompany those data sets, how they might best be preserved, whether disciplinary or institutional strategies should come into play here, the role of government, and interestingly some questions that I think are exceedingly difficult to answer that deal with the economic impact of access to this data. Does this help to grow businesses to create jobs? The cutoff date for comments on that is early January. As I say, you can find links in the CNI Announce Archives or a number of other places, but I'd urge you to think about whether your institution or you as an individual, it's open to individuals and institutions, want to express a view on this. I think that in this room, certainly and in the institutions represented in this room, we have an unbelievable depth of knowledge and insight about these issues and they are indeed important ones. 
I also want to note in this connection a couple of other things that are going on in the big data world. A couple of years ago, the National Academies established a Board on Research Data and Information; it goes by the acronym BRDI. That was initially chaired by Michael Lesk of Rutgers. He has rotated off, and I've agreed to co-chair this board for the next couple of years with Fran Berman, who is the former director of the San Diego Supercomputer Center and is now the vice president for research at RPI. I think this is another opportunity to look broadly at policy ramifications of a lot of the big data movement, a lot of the issues that we're familiar with here around data curation and data reuse, and I would urge you to look forward to that group also helping us to connect up with other policymaking and scientific and scholarly communities to help understand this. There's some other really interesting stuff happening, and I don't have time to go into huge detail on this, but I think we'd be remiss not to note it. We had a plenary here not long ago from Liz Lyon of UKOLN, where among other things she highlighted some of the developments that she had been tracking in what's often called citizen science, or public engagement in science. I think Bill Michener, tomorrow in his closing plenary, will probably touch on some of that as well. It's very interesting to see how this is gaining momentum. There's clearly a movement here with some significant force behind it, and I think it's one we need to track very carefully, because it promises a broader reconnection between academic research in many areas and the broader population in the U.S. You might recall, for example, a system called Galaxy Zoo, which I think Liz had a slide of, or which you may have seen at some other meetings. This is basically a system that teaches people to classify galaxies and then shows them galaxies to classify.
So this was such a success that the people who did this have gotten a bunch of funding to build a whole series of other games, basically taking their experience in packaging up this kind of citizen engagement and applying it to a range of other fields. A fascinating development. The other one which really has gotten my attention is the whole set of issues around genomics, and especially personal genomics. The cheap sequencer has become, you know, the CIO's latest nightmare, joining things like backhoes in haunting their dreams. It's getting really cheap to sequence genomes, and these things pump out horrendous amounts of information, something like 30 terabytes per genome before it's matched up and paired and reassembled. When you get all through, you only have a few gigabytes out the other end, but the raw data that needs to get fed from the sequencers to the computational apparatus is tremendous. And if you look at the curves here, they are much worse than the Moore's Law curves: we are developing the ability to sequence much faster than the ability to compute and reassemble the sequences, which is an interesting development. Now what's happened is that there is a whole set of genomically based medicine, which basically, to a first approximation, says that what you want to do is get sequences for individuals. We can't quite afford to do this yet at scale, but it's clearly coming within the next few years. So you want to get sequences for people, including all the variations in that sequence that make them individuals, and then you want to get medical records. And then what you want to do is run the most amazing calculations you can imagine over, you know, entire populations of genomes and medical records, and try to figure out what patterns of variation correlate to what conditions, and what treatments are effective given various kinds of conditions and genomic variation.
These are computations on a scale that is just mind-boggling, and as you get into complex things, where it's not just one variation basically causing a disease but maybe 50 variations causing a statistical predisposition to one, you need the resolving power of enormous numbers of observations to do this. So we're starting to have a series of policy conversations bubble up about who owns your medical records, who owns your genome, is it reasonable to link them, and how anonymous can that be if you want to provide these huge sets for data mining. The answers here are, needless to say, quite complicated, but they also look like they're going to be a gating factor on biomedical discovery in the next couple of decades. I was in the U.K. last week, and while I was there the prime minister basically made an announcement that it was his intention to see that all the medical records in the National Health Service would be available for computational mining. Remember, they have a national health service; we just have enormous hospital complexes, so you have to deal with several of these. But basically everybody was going to be a research subject unless they opted out. That's a wonderful thing, and it's also kind of a disturbing thing. You may have noticed that right now you can get yourself genotyped for a couple hundred bucks from 23andMe or a number of other companies, and there are people who are taking the position now: I'll put my genome up. Some of them will say, sure, I'll put my medical records up too; most of them right now are probably just putting up the genome. I can readily see that there's going to be a very interesting set of conversations showing up here that bring together policies around data mining, around privacy, around data curation, that are going to have some fairly high stakes for the research enterprise. Let me just ask you a couple of questions.
If you were going to put your medical records on deposit (and, side question, who owns medical records for dead people? You can compute on these just as well after people are dead; it's still good data), where would you feel good storing them? Would you feel better storing them with your insurance company or your local library? Just something to think about. I will leave that there, other than to say I think there are a lot of interesting issues that are going to show up. Big data is rippling in a lot of other directions too. For all the progress we've made in high-performance networking, data seems to be growing way faster than the networks. You're seeing a lot of investment in trying to think about how to move data from place to place. You're certainly also seeing conversations, though, about data that's too big to move: once it gets there, it has to stay there, because there's no way we can move it, other than maybe shipping a lot of tapes on an airplane or something. Even the time to write the tapes is intractable in some cases. This is very interesting, and it connects up quite directly to some of this talk about clouds and cloud services. When you start talking about clouds for very high-end data, it matters which cloud you're in. You don't move from one supplier's cloud to another casually. The bandwidth constraints and the time it takes to replicate data are such that there is some significant lock-in to be dealt with if you're thinking about clouds as a potential solution here. We see this propagate in lots of other directions too. Just to name one: access management. We've taken the first steps around access management with federated identity management. When you start thinking about custody of complex research data across time, you actually wind up with a nasty mixture of individual and role-based access rights that cross institutions and that need to be preserved. We know very, very little about this at this point.
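The "too big to move" point lends itself to simple back-of-the-envelope arithmetic. Here is a minimal sketch in Python; the 30-terabyte figure comes from the sequencing discussion above, while the link speeds and the 80% efficiency factor are illustrative assumptions, not numbers from the talk:

```python
# Back-of-the-envelope: how long does it take to move ~30 TB of raw
# sequencer output over a network? Link speeds and the efficiency
# factor below are illustrative assumptions.

def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to move `size_tb` terabytes over a `link_gbps` link,
    assuming we only achieve `efficiency` of the nominal rate."""
    size_bits = size_tb * 1e12 * 8            # terabytes -> bits
    rate_bps = link_gbps * 1e9 * efficiency   # effective bits per second
    return size_bits / rate_bps / 3600

raw_genome_tb = 30  # raw output per genome, as cited in the talk

for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps link: {transfer_hours(raw_genome_tb, gbps):6.1f} hours")
```

At a well-utilized 10-gigabit link, a single genome's raw output ties up the network for most of a working day; at a gigabit, it is several days, which is why shipping disks or tapes starts to look competitive, and why, once the data lands in one cloud, it tends to stay there.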
We don't even, I think, have particularly clear statements of the problems, but I think we'll see work showing up in that area as well. Here's another large-scale development that I'm seeing, and I'm seeing it with some momentum behind it; it connects up, just like big data, to a range of agenda items for the Coalition. This is what I would call the new scholarly communication movement. There have been a whole series of meetings and workshops over the past year, year and a half, among a sort of motley, self-declared crew of technologists, scientists, and publishing people who have started to come together and really talk about how we take the next steps in changing the scholarly communication system, except that the frame of the conversation is quite different. This isn't primarily an economic conversation. The agenda here is not open access, although I would say many people come to these discussions with a strong bias that says that ultimately a system that is open-access-based, however it's funded, is liable to occur for other reasons. Instead, what these folks are interested in, I would say, is really a tripartite agenda. One part is sorting through the relationships between scientific publications and data, and I would say these folks are mostly focused on the sciences, not the humanities. They're interested in the connection between scientific publications and the underlying data: how those should be related, the tools that would be used, the citation or pointer mechanisms that should be used, all of that. Tools are a very real issue, because you do want to be able to do at least some simple things that are sort of like computations localized in the paper, like graphs of a data set. You really want that to just sort of be automatic in there, but also manipulable. So the question is, how do you get from here to there? The second piece of the agenda is really about formats and, again, about tools.
It doesn't have a clean break line with the questions about data, but it's about how you write papers that don't simply emulate the papers of the 1920s and 1930s: how you write papers that are enriched by being read in a computationally mediated way, papers that understand that they have readers that are human and readers that aren't human but are computational. Then the third piece of this agenda is a set of issues about what I loosely call peer review and impact. Another way to think about them is as helping people to allocate attention, and helping people to understand the impact and reach of their work. So this includes thinking about various kinds of measures of impact, but one particularly salient idea that seems to be emerging is the notion of people taking some responsibility for their own scholarly identity, which includes their bibliography, probably some brute facts about their biography, and connects all of these up. And it's quite striking to look at developments across Google Scholar, Microsoft Academic Search, Elsevier's systems, Web of Science: all of these are starting now to give you tools to clean up your bibliography, to disambiguate authors, or to identify authors that shouldn't be separate. Unfortunately, right now, they don't propagate changes from one to the other, and we're still lacking some vital pieces of infrastructure here, author IDs for one, and there are a couple of interesting initiatives in that area, most notably ORCID, which are looking to plug some of that gap. But I think that we're starting to see the emergence of a whole set of tools that support this idea of public records of publication and the ability of individuals to take control of those records, and CNI has been quite active in some of those conversations of late. I want to simply note the discussion about outsourcing, clouds, at-scale cross-institutional activities, web-scale discovery.
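To make concrete why the author-ID gap matters, here is a toy sketch (entirely illustrative; the names, titles, and identifiers are made-up or sample values, and this is not any real system's API) showing how merging publication records by name string conflates distinct people, while merging by a stable identifier does not:

```python
# Toy illustration of author disambiguation: two record sources,
# two distinct people who both write as "J. Smith" / "John Smith".
# The ORCID-style iDs here are sample values, not real people.

records_a = [
    {"orcid": "0000-0002-1825-0097", "name": "J. Smith", "title": "Paper One"},
    {"orcid": "0000-0001-5109-3700", "name": "J. Smith", "title": "Paper Two"},
]
records_b = [
    {"orcid": "0000-0002-1825-0097", "name": "John Smith", "title": "Paper Three"},
]

# Keyed by surname, two different authors collapse into one record.
by_name: dict[str, list[str]] = {}
for rec in records_a + records_b:
    by_name.setdefault(rec["name"].split()[-1], []).append(rec["title"])

# Keyed by a stable identifier, each author keeps a clean list.
by_orcid: dict[str, list[str]] = {}
for rec in records_a + records_b:
    by_orcid.setdefault(rec["orcid"], []).append(rec["title"])

print(len(by_name["Smith"]))                 # three titles: two people conflated
print(len(by_orcid["0000-0002-1825-0097"]))  # two titles: one person, correctly
```

A shared identifier that all the systems agree on is exactly the kind of infrastructure piece that initiatives like ORCID are trying to supply; without it, a correction made in one bibliography silo has no reliable way to carry over to the others.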
There's a whole array of conversations there that clearly have gained a lot of momentum. I think that right now there is at least as much talk as real action there. I think that there are some formidable challenges in the area of understanding risk and understanding lock-in in these settings. I think that we're going to come to realize that selecting and joining clouds is going to have more to do with selecting and hosting communities than we might have realized in the past. Some of that is going to be driven by the realities of bandwidth. Some of it is going to be driven by the enormous diversity of kinds of service that the different clouds are offering; they're not really very substitutable right now. I do want to move on, though, because there are just so many areas that I want to comment on. The whole notion of digitizing collections and making them available as a basis for a conversation with an audience or a community is one that we've come across numerous times over the last five years or so: the placement of photographic collections on Flickr, these kinds of things, Flickr Commons. It's striking to me how much traction this has gained. There's a recent OCLC study that Karen Smith-Yoshimura and Cyndi Shein of the Getty put out that does something like 75 case studies of institutions that are using these kinds of tactics of user-enriched collections. My sense, unscientifically, is that this is gaining more traction in the museum world than in the library special collections world or others, but I think that this is a model that we really need to continue to investigate and exploit, and one that has some very natural connections back into public science and into sort of humanities versions of public science, public humanities if you want. Another thing that's crossed some sort of inflection point this last year, I think, has been e-books in popular publishing, or if you'll allow me, mainstream commercial publishing.
I don't know what the best thing is to call it, but it's very clear now that the level of commerce in e-books is such that it's starting to restructure a good deal of the economics of publishing. Panic is setting in in various quarters. Folks with some long-standing ambitions are actually having opportunities to see those realized, for better or worse. Right now, most of the action and anguish as it affects the cultural memory organization world is limited to public libraries. Public libraries are just having a horrible, horrible time with e-books, because the publishers are basically declining to do business with them at all, or declining to do business on terms that remotely resemble the historic economics of public libraries acquiring material. This is a really serious problem for public libraries, and it's one where it's very hard to know what to do, particularly short of some sort of reinterpretation of copyright law or something. And it's one that I mention here in part because I don't think this is going to be limited to public libraries. I think that we are seeing the beginnings of a very nasty mess in mainstream commercial publishing, which includes a tremendous amount of material that's essential to the cultural and scholarly record going forward. These are things that end up in our research libraries as well as our public libraries. And these are things that we're going to need, not just today, but two generations from now, to understand a lot of what's going on in our culture. When was the last time you saw a coherent discussion of how we are going to preserve mainline commercial e-books? I don't even see those words mentioned in the debate about, you know, squeezing public libraries out of the e-book market. And yet that's a debate in which we collectively have a very high stake, I think. There are other places that we had better watch out for too.
The used book world is going to go away as another byproduct of this, and that has implications that I think are very substantial for long-term access to the cultural record. The last kind of event I'll point to in that world is that all of a sudden, pieces of the market are falling out of the control of the major publishing players, too. You are seeing a tremendous number of do-it-yourself authors, many more than we used to see, and with a much greater reach. We're also seeing successful authors, on the other hand, withdrawing from the publishing system in some cases. The implication of this is that it gets harder and harder to track down and identify the scope of these kinds of materials. You know, once upon a time we had tools that helped us with this, tools that I think are getting very, very shaky at this point. There's more to say about this, but the place where I'd leave it is that it behooves all of us to pay attention to what's going on in this area and to recognize that this is not an issue that is going to be limited to public libraries, although it may start there. In terms of preservation issues, there's again a tremendous amount we could talk about. I just want to note two things. The use of social media of various kinds continues to go through the roof. People are spending an incredible amount of time on these systems and communicating through these systems. With the exception of Twitter, as far as I know, we have almost no meaningful strategies for preserving any of these, and we don't seem to be making much progress in this area. This is a problem that is getting bigger and bigger, and I think it is one that really should be worrying all of us. I also just want to note an event that I thought was quite striking and that snuck by with less discussion than I would have thought.
Apple earlier this year, with their transition to OS X 10.7, without really making any real announcement of it, pulled the plug on basically all software for their machines that hadn't been updated in recent years: all of the PowerPC-based and earlier programs, a remarkably large collection of work. This was one of the most striking examples of deliberate obsolescence on a very large scale that I can think of. What was so amazing about it to me was that they didn't say anything about it in advance; they just sort of did it, and people didn't really talk about it that much, except for a few people who were tracking what's going on. I think that the scale of this really invites us to ask some serious questions about policies around planned obsolescence and commitments to availability for the operating systems we use. It's getting to be too large-scale an issue to ignore. Another way of thinking about what they did is that they just handed us a phenomenally sizable digital preservation problem. All these things used to work just fine. It actually wouldn't have been that expensive to keep them working, but instead we got a very big move, all at once, into "this is now preservation material." The scale of this is probably something that would benefit from some study. Just to swing into the last set of comments I want to make from there, I think it particularly bears some study as we look at what's going on in the mobile environment. Mobile applications are very hot. Certainly we've seen a phenomenally rapid uptake in tablets based on Apple's iPad technology. We've seen a very, very fast uptake of smartphones. What we've got here, though, is many of the worst characteristics of the PC wars of the '80s reasserting themselves, with apps that aren't portable, poorly specified things, proprietary things, walled gardens. Yet you see some things that really give me a lot of pause.
A very popular activity right now seems to be to take content and package it in an app so that it looks really good on something like an iPad. On the one hand, this is a wonderful browsing experience. On the other hand, you've now tied that content to a platform: you've tethered it to a set of assumptions about platform, about stability of platform, about software obsolescence, in ways that we spent 20 years learning not to do by developing content standards that are very separate from software standards, things like XML and HTML. I think that it's really worth thinking hard about some of our content strategies as they apply in the mobile environment, particularly for those of us who are concerned about the longevity of content, and recognizing that on the flip side we are going to be faced with, I fear, a whole new and difficult set of preservation challenges that arise out of this mobile environment. This is a set of trends that, again, we're going to be continuing to track closely. We did an executive roundtable about a year ago looking at strategies for campus approaches to mobile technology. It's very clear that institutions are really struggling with this question of how far to go down the app pathway, and how much to insist on treating these more like little computers: saying use a browser, use content standards that are independent of platform. I think, given the potential impact of mobility, this is actually a pretty significant crossroads we're coming up to. So these are a few of the big trends that I've been watching over the last year. There are plenty more we could talk about, but these are probably, I think, among the most important. And I think you can see how every one of these ties quite specifically into agendas that CNI has been pursuing over the last few years and will be continuing to pursue into this program year.
I would hope that many of the aspects of these trends that I described aren't surprises to you. These are actually things that we've been talking about collectively and thinking about for some years now, and while they are developments that are gaining momentum and gaining impact, they are not developments that are blindsiding us. And I hope that we at CNI can continue to help to ensure that as we try to understand and shape these developments together, we are not blindsided. Perhaps, to use another metaphor that I just loved, we can offer you surfboards for riding the wave of data, as a recent report from the Knowledge Exchange suggests of their program to deal with data curation challenges. I'm delighted you can join us for this meeting, and I can assure you that for everything I've touched on, you will find lots more detail and lots more insight among the sessions scheduled for the next day and a half. And I've actually managed to finish with time for a couple of questions. Thank you. There's a mic there, or if people want to shout, I can repeat for the recording. The question is whether I could talk a little more about why this new world of mobile apps, why I believe that, is going to lead to more preservation problems. I think, I guess to me, it feels like we are recapitulating a lot of the late 80s and early 90s here, with an enormous number of applications that are specialized, in many cases, to more platforms than the market is going to be able to support. I mean, it's very hard for me to believe that we're going to see more than two, maybe three, mobile platforms. 
So what you're going to see, I think, over the next five to ten years as things straighten out (I'm actually thinking it's going to be more five than ten, because the rate of change just seems to be so fast in this area) is a lot of orphaned content that got wrapped up inside of applications that want to run on hardware and operating system platforms that just don't exist anymore, where nobody thought to preserve the content in portable forms, so that we're going to wind up with a delightful array of emulators for proprietary mobile phone platforms and things of this nature. That's a lot of what I worry about. Some of this content will be significant. A lot of it, frankly, will be repurposed, and it doesn't matter very much. If you read the newspaper on one of these devices, it's probably just a reformatted newspaper that's stuffed inside an app, and there's a big database somewhere that's the definitive newspaper, and the only part you care about is a little metadata that says, these were the three articles they pumped out as the top of the news last Thursday onto the handheld. But you're going to see a lot of other creative content that will be built natively here and not replicated elsewhere. That's what I fear. The next question: you said earlier today that the circulation of ebooks by public libraries is posing a particular challenge; what kind of model do you think would be sustainable for the circulation of ebooks amongst public libraries? Well, I mean, sustainable is an interesting word there, because the question is: sustainable for whom? I mean, you know, frankly, it's not clear to me why following a model very much like what you do in print isn't sustainable, where you actually get a copy for about the same price you get on the standard consumer market, you loan it out to one patron at a time, and, you know, you can keep owning it pretty much forever. That kind of revenue stream was pretty sustainable for printed works. 
I think there's actually more profit overall in that scheme for electronic works, and it feels to me like some variation on it ought to be sustainable. I know that one of the major publishers proposed a model where basically they attempted to emulate wear and tear and have the book self-destruct after, what was it, 22 or 26 circulations or something? And that just seems so intrinsically barbarous, to have a book self-destruct. I mean, it's just, you know, I just can't see it. You know, maybe going to something that slows the rate of circulation, something that says you only get to circulate it every five days or so during the first year you have it, so that you have to buy a few more copies of something that's in high demand. You know, I think there's some room to move there if you had to. But models that include self-destruction just really bother me. And models that meter, you know, that provide a disincentive to the library for doing more circulation, are really problematic. If you had to put a little friction in the system, I'd think of it in terms of rate of circulation or something like that. I do think there are a lot of other interesting developments floating around out there. Given that almost nobody can make a living being an author anymore, and many people are doing it for ancillary reasons, some of them just for the love of it, this raises some really interesting questions about prospective direct relationships between authors and libraries that I'll be very curious to watch in coming years, too. And I should also say, you know, that for the kind of baseline thing I just talked about as a potentially sustainable model, at least to my naive eyes, there's no reason why you can't fancy it up a little to take advantage of situations where libraries want to channel a little more money into meeting demand spikes. 
For example, as I understand it, it's quite common now, if you get a very popular novel or something, for a public library to buy a number of copies of it, and then in a year, or whenever the interest in it dies down, they'll keep one and the other four go out in the Friends of the Library sale. You could easily emulate those kinds of things in a pricing model if you wanted. I think I have time for one, maybe two more questions, depending on how long they are. Do I have any takers? No? Okay. Well, in that case, grab a quick refreshment and remember, this is a short break at this meeting. On to the next session. Thank you.