Okay, well, welcome. I'm delighted that you all have made it to the Fall 2018 CNI member meeting. I know it's been a difficult trip for some of you: some of you have come a long way internationally, some of you have had to fight your way through some tricky weather, and I know at least a few folks who hoped to be here aren't able to be, and I'll have a bit more to say about that in a couple of minutes. I do want to welcome in particular our international participants, and I also want to take a moment to welcome some new members: the University of Texas Rio Grande Valley, the University of the West Indies, the University of New Brunswick, Williams College, the University of Nebraska Medical Center, and Boston University. I am delighted that the University of Missouri has been able to rejoin us. I'm pleased to have Drexel University's College of Computing and Informatics joining us as a member, along with the Fenway Library Organization and, finally, Weill Cornell Medicine. Welcome to all of these new members. I also want to note, as all of you should have seen when you registered for the meeting, that we are now, at long last, operating under a code of conduct, and will continue to do so going forward. There is a message board out by the registration desk; we will post changes there, and we've also posted a list of the sessions that are being recorded. I do want to note a couple of schedule changes, and you don't have to take super detailed notes because they will be out there. We have had to cancel the session on demonstrating faculty impact through new data and visualization services that was scheduled for 5:00 to 5:30. The presenters were unable to get here; however, they have told us that they are going to try to make a recording available, which we'll put on the website. One thing you may have noticed is that, with the use of Sched, we have been using people's agendas as a way to try to reduce the number of mistakes we make about room allocation.
Every now and again, we guess wrong about how popular something is going to be, and the bad news is we wind up with a room that can't accommodate everybody who would like to see the session. Based on some data that we're getting out of Sched, we have moved two sessions. The session on making digital humanities projects sustainable, scheduled for the slot after this one at 2:30, has moved from Senate to Empire. The Empire Room is a little tricky to get to, but there is signage and there will be people to help you find it. That should give us more room. Additionally, in the 3:45 slot today, Monday, the update on funding possibilities, priorities, and trends has moved from Congressional A to Empire. Again, we hope that will allow us to accommodate the large number of people who want to see that session. We will post any other changes or cancellations on the message board. I think that's all of my announcements, so now let me get on to what we're really here for, which is to talk about what's been happening, some of the new initiatives CNI has launched, and what we're learning from them. The place I want to start, actually, is with some large-scale technology developments. It seems to me that we're in a period of technology hype. I am constantly seeing things promoting various technologies, and I basically react by asking: that's fine, but what was the problem? I think in many cases we are really losing track of what problems we're trying to solve in our enthusiasm about the potential for technology to cure all imaginable ills. I just want to comment on a few particular areas of technology that I have been watching with interest, occasional puzzlement, and some excitement. Perhaps the poster child for this is blockchain, which promises to solve all of the ills of the world, society, and everything else. I'm not going to spend a lot of time debunking that.
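Blockchain aside, the underlying idea of a tamper-evident public ledger is simple and long predates it. Here is a minimal sketch of a hash-chained append-only log, assuming nothing beyond Python's standard library; it illustrates the idea, not any production system:

```python
import hashlib
import json

def add_entry(ledger, record):
    """Append a record to a hash-chained ledger. Each entry stores the
    hash of the previous entry, so tampering with any earlier record
    invalidates every hash that follows it."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    ledger.append({"record": record, "prev": prev_hash,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})
    return ledger

def verify(ledger):
    """Recompute every hash in order; True only if the chain is intact."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, "deposited dataset A")
add_entry(ledger, "deposited dataset B")
print(verify(ledger))            # True
ledger[0]["record"] = "forged"
print(verify(ledger))            # False: the chain detects the change
```

The point is that the trust property comes from the hash chain itself, with no cryptocurrency or mining anywhere in sight.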
I think David Rosenthal has a session later which will do that far more eloquently than I could. I will note that the notion of trustworthy public ledgers is a very useful idea that long predates blockchain and is not quite the same thing, and certainly we're seeing places where that makes sense. But the notion of things that emulate cryptocurrencies, with brownie points of various sorts as a way of motivating various behaviors, is something I'm awfully skeptical of. I am much less skeptical of the discussions around machine learning, although I get kind of disturbed when people equate machine learning with the general AI problem. Machine learning is typically a much more specific and focused activity around recognizing patterns, or in some cases generating patterns. It can be quite powerful, but it can also be quite fragile, depending on whether it's getting data that matches its training sets or data outside what it's been trained on. It also doesn't do well if it doesn't have training data: if it encounters new problems it hasn't been trained on, it really doesn't know what to do. But when we think about what problems we're going to solve with it, I find myself recognizing that it's less and less about the technology and more and more about acceptable risk. I'll give you some quick examples. This looks very good for things like classifying images, within certain parameters and scopes of training sets. It looks like it might be pretty good for things like redaction, or identifying sensitive material in special collections for human review. But the question there is not whether we can do pretty well with it; we probably can. It's: what's the acceptable level of risk in that application, and when are we willing to leave this to a machine?
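That acceptable-risk question can be made concrete as a confidence-threshold policy: accept the machine's judgment only when it is confident, and route everything else to a person. Here is a hedged sketch; the model below is a stub standing in for a real trained classifier, and the 0.98 threshold is an arbitrary illustration, not a recommendation:

```python
def route(items, model, threshold=0.98):
    """Accept the model's label only when its confidence meets the
    threshold; send everything else to human review."""
    auto, review = [], []
    for item in items:
        label, confidence = model(item)
        (auto if confidence >= threshold else review).append((item, label))
    return auto, review

def stub_model(text):
    """Toy stand-in for a trained sensitivity classifier: pretends to
    flag documents mentioning 'ssn' as sensitive."""
    if "ssn" in text:
        return ("sensitive", 0.99)
    return ("clear", 0.80)   # low confidence: a human should look

auto, review = route(["memo with ssn 123-45-6789", "meeting notes"],
                     stub_model)
print(len(auto), len(review))    # 1 1
```

The interesting institutional question is where to set that threshold, which is exactly the risk-management decision, not a technology decision.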
And do we need to recheck everything by humans? In which case, why do we need the machine in the first place? I think there are very complex trade-offs there, and it's very useful to frame a lot of this as managing risk, recognizing that there are going to be errors, and there are going to be problems with training-data scope and training-data bias. We've certainly seen again and again that if there's bias in the training data, systems faithfully learn to internalize that bias in their classification or recognition algorithms. There are lots of other areas where one hears a lot of excitement. 5G telecommunications technology is apparently going to be the solution to everything having to do with telecommunications. It does look to me like it will create some competition to the home, and give some of the cable companies, for example, a run for their money. But it's going to happen mostly in the same populous areas already covered by the existing cable companies. It's not going to fix the digital divide: nobody is going to put 5G in places that don't have wireless coverage today, because it's not economical, and those places don't have cable coverage either. So I think it's important to understand some of the constraints on this. We could go on and on with these. Another one that fascinates me is the so-called Internet of Things. We tend to focus on the consumer side of that, these talking devices that people are putting in their homes, where you wonder what could possibly go wrong. And certainly people who are knowledgeable about the computing and networking side of this look at these things and just throw up their hands at the endemic security problems built into all of it as devices go un-updated. There's also an industrial side of this, though. Consider this.
Some campuses are now doing things like overhauling the entire lock system in all of their dorms, facilities, and rooms with smart locks. It's an Internet of Things kind of application, and a very complicated and pretty expensive one; I've heard some amazing presentations on the complexities of it. And it has a very different kind of profile, just as a factory Internet of Things deployment would have a very different profile, from the consumer things we see so prominently. Let me move on, though. I want to note a couple of other broad developments before focusing on things that are really close and special to our community. One is security, and this ties a little into the Internet of Things concern. We have seen some security problems emerge in the last year that are quite unlike most of what we've seen in the past. The classic security problem has been: oh, there's a bug in the software; let's release a patch, we'll get the patch on, and that will deal with the problem. Then we had problems with exciting names like Spectre and Meltdown, which basically say: all of the microprocessors we've deployed for the last ten years have an exploitable design flaw in them. We're probably not going to be able to run out and replace all of those in the next week, so what do we do? Well, it turns out these are very complicated devices, and there's actually microcode inside them; in some cases you can fix the problem, in some cases you live with it, in some cases you isolate it. But it reminds us that the world has gotten so complicated, and a lot of the hardware out there has become so complicated, that when we get these kinds of deep design defects, recovering from them is not a matter of deploying a software patch; it's a matter of managing a problem at a very sizable scale.
We're seeing some of this longer-term thinking in how we are starting to harden cryptosystems, in anticipation that at some point we will have quantum computing that will make a lot of our existing cryptosystems vulnerable. Now, the best guess is we won't have that quantum computing for a good solid decade or more, and I refer you, for example, to the recent National Academies report on that. But the thing to recognize is that swapping out a major cryptosystem, like a public-key cryptosystem that's deployed everywhere on the net, is itself a decadal process. So we need to be looking pretty far ahead, and that notion of looking ahead quite a ways, I think, is going to become more and more thematic as we think about things; I'll touch on some other examples of that. I also want to note some of the security implications of getting better at running old software. We are making a lot of progress on how to preserve older software and be able to rerun it: to package up environments and then bring them back and run them. Those environments, I think, are going to need to be rather carefully air-gapped, because when we bring them back up, they are going to be full of flaws that were discovered after they were packaged and have never been patched, in the same way that it would be a very bad idea to put an unpatched ten-year-old system of whatever type on the net today. As security moves on, we really need to think about running preserved software in a much more constrained kind of environment. I want to move on and make a couple of comments about stewardship and thinking a few years ahead. Many of you heard the news about DPN, and I think DPN, in handling its phase-out, is really showing us what's at stake in terms of good, responsible behavior by stewardship organizations.
They are not simply doing what some of the platforms on the consumer internet have done: oh, we're shutting down next week, been nice knowing you, pick up your stuff before we close down if you get around to it. DPN is actually going very systematically through the material that's been entrusted to it, figuring out where material needs to be returned, where there are already other copies held at the contributing institutions, and things like that. In the same vein, I want to recognize and commend the work CLOCKSS did recently in announcing a succession plan should CLOCKSS, at some point, have to cease operation. They are certainly not ceasing operation, but thinking in those longer terms is clearly a more and more important thing for us to do in terms of stewardship. Responsible stewardship operations need to include some thinking about their own demise and succession. I want to share one other thing that has really been striking me in this area. I had an opportunity in April of this year to go down to the University of Houston, which hosted the Personal Digital Archives 2018 meeting. It was an excellent meeting. Houston, as you may recall, got absolutely clobbered by Hurricane Harvey in August 2017, really quite a devastating situation. There's something very solemn about visiting a site that's undergone this kind of disaster; it really reminds you of the fragility of things. I remember also visiting New Orleans not all that long after Hurricane Katrina wiped out so much of the heart of that city. At the Personal Digital Archives meeting, there was a tremendous plenary talk by Lisa Spiro of Rice University about an effort to build an archive of memories of Hurricane Harvey, captured right there at the local institutions. But it also served as a reminder that we are seeing more and more of these kinds of natural disasters as a result of various kinds of climate shift.
I think it's really hard to deny that we're seeing very big, very destructive storms; the wildfires in California are another example. I was just looking at some studies of the vulnerability of World Heritage Sites around the Mediterranean Sea to an increase in sea level. It really won't take much to submerge a good deal of a number of those sites. I think we are, unfortunately, in a little bit of denial, maybe a lot of denial, about how increasingly vulnerable many of our physical sites, artifacts, and collections are becoming to these increased risks. The ability to represent them, capture them, or back them up in digital form, as well as to better protect the physical sites where that's possible, and I don't think it necessarily is very possible when you're talking about things on the scale of a city, is clearly going to be more important as we look at the agenda for the next 20 or 30 years. This is something that's going to creep up on us slowly, but it is, I think, a very real thing that needs to be factored into our planning. I want to shift and talk a little bit about developments in scholarly communication, which is very central to our agenda, and there are clearly a lot of things afoot. Some of them, interestingly, are things in which we here in the United States are playing a pretty minor role: for example, the discussions around Plan S, coming out of a group of funders primarily in Europe, though also some multinational research funders like the Gates Foundation. I think it's way too early to understand the economic implications of that or how effective it's going to be, but one of the little things I seem to be seeing buried in some of the implementation details, which may be just as significant, is that this looks like it's raising the technical bar for deposits of the public versions of articles: for example, it's specifying XML with particular DTDs.
This is going to be very hard on small players in the publishing process. This is not a problem for any of the big publishers, whether they operate on open access models like PLOS or are the big commercial players. But it's really not clear to me how this is going to work out for labor-of-love journals mounted on smaller platforms, things like OJS, because this is not a software problem. This is a some-human-needs-to-do-markup problem in order to get the materials into the right form. So again, I think we need to be mindful of a number of the side effects built into the details of some of this. Another area I just want to flag as a matter of great concern, without digging into it, is the continued issue around the capture and quantification of impact measurement of scholarly work in various proprietary and non-reproducible ways. I think that is a very troubling development that we need to think deeply about, and its longer-term implications are, I think, quite troublesome. But let me move from things that make me nervous to things that give me a lot of hope, and that also point at some reconfigurations in the landscape that bear some consideration. One of the things we've learned over the years is that when the journal editors in a field get together and change the rules, or clarify the rules to reflect desired behaviors within a community, they can be a really powerful force for change. For example, some years ago, you may recall, all of the journal editors publishing in genomics-related areas started saying: if you're talking about gene sequences, I don't want to hear it unless it's got a GenBank ID. You've got to deposit those sequences or we're not going to take your paper. And that basically codified and formalized the practice of deposit.
I want to note that most of the earth science publishing community, very recently, under the auspices of the American Geophysical Union and with funding from the Arnold Foundation, has come up with a consistent set of directions to authors that talk about data sets: their availability, and the need to place them in repositories prior to publication. As a byproduct, it also fairly conclusively moves the publishers away from the ambiguous flirtation they've had over the years with supplementary data, where they have been unclear about how long they would maintain it or what kinds of preservation assertions they'd make about it. And it moves this squarely to a repository system that exists in complementarity with the journal publishing system, using linkages like DOIs, DataCite, and similar things. The other thing this implies, at least to me, is that for data, external repositories are probably going to play a much bigger role than institutional repositories. And there seem to be a lot of reasons for that. One is the complexity of the way repositories are going to fit into the discovery, publishing, and citation relationship landscape; there's a lot of apparatus there that would be really expensive for each institutional repository to reproduce. The other is that we can see some very interesting models of this occurring. I've been talking recently with the California Digital Library and Dryad, which are building exactly that kind of alliance, and I think a number of other institutions are starting to look at following that sort of pattern, at least for research data. Now, there may be other purposes you want your institutional repository for, but I think there's a lot of impetus to go to more central, and often disciplinary, repositories for data.
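As a small illustration of the citation-linkage apparatus just mentioned, here is a hypothetical helper that formats a dataset citation in the general pattern DataCite recommends (creators, year, title, publisher, DOI resolved through doi.org). The field names and the sample DOI are made up for illustration; this is not a DataCite API:

```python
def data_citation(creators, year, title, publisher, doi):
    """Format a dataset citation in the general DataCite pattern:
    Creator(s) (Year). Title. Publisher. https://doi.org/DOI
    Illustrative only; real metadata follows the DataCite schema."""
    who = "; ".join(creators)
    return f"{who} ({year}). {title}. {publisher}. https://doi.org/{doi}"

# Hypothetical example record; the DOI below is not a real identifier.
print(data_citation(["Doe, J."], 2018, "Storm surge measurements",
                    "Dryad", "10.5061/dryad.example"))
```

The value of this pattern is that the article cites a persistent, resolvable identifier rather than a supplementary file whose fate is tied to the journal platform.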
The other piece of this thread I want to note is that as we talk more to faculty, and we just had an executive roundtable on this this morning, faculty are starting to express a considerable preference for external systems over institutional systems, because if they need to move from one institution to another and their materials are held in an external system, that move is pretty easy. If they need to take everything out of one institutional system and rehost it or replicate it on a system at another institution, that is potentially a lot of work, and it's work that we don't give faculty members and researchers a lot of help with right now, by the way. I would also draw your attention to a lovely report that came out recently from Ithaka S+R, titled "Scholars Are Collectors," which really looks at the pattern of scholars building up collections, on a personal basis, that they use to support their research and teaching. Most of these collections are now digital, and again, rehosting them is a big problem. It's a lot of work, it's time-consuming, and it takes time away from the actual scholarship people want to use this material for. That, among other things, is why you see these fascinating phenomena where somebody moves from institution A to institution B to institution C, but they're still using a website at institution A, and nobody really wants to talk about why that site can still sit there or how this is all being worked out. Somehow it just goes on like that, because nobody wants to invest the time and trouble in moving it. So I think we're seeing some signs here of a very clear trajectory for where a lot of research data is likely to be moving. It doesn't, by the way, solve the sustainability problem; that's still very much a challenge. But it moves the locus of the sustainability problem.
The last thing I want to talk about in the area of scholarly communications, and I'm really intrigued by what's going on here, is the question of monographs in the digital world. There was a slogan that came out of scientific article publishing back when Force11 was established, probably almost ten years ago now: beyond the PDF. It basically argued that we needed to find a future for journal articles that was something beyond reproducing a print model that's been with us for one, two, three hundred years. Somehow that didn't really happen very well. We still produce scientific journal articles that look a lot like print articles, except that they have clickable citations, and now links to data, and sometimes tables you can click on and expand. There is a lot of interesting conversation and experimentation about how to move past that, but not a lot that I think has been genuinely established yet. Part of the reason is the pressure for fast publication. The cycle time for doing something different might be better suited to monographs, or long-form discourse generally, than to journal articles reporting the latest scientific results. There has been a flourishing of activity around new publishing platforms specifically, and certainly clearly, targeted to support long-form works. I'm thinking of work like Fulcrum at the University of Michigan and PubPub at MIT; there are others. I think one of the key issues we need to deal with right away, if this is really going to take off, is that we've got to be able to tell the author: your work will be preservable, and it will be preserved. To my view, that was the critical thing that allowed the transition from print journals with an electronic copy in addition to the print, to electronic-only: the establishment of things like CLOCKSS and Portico and other deposit arrangements that made people comfortable with that.
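One piece of the preservability story for long-form works is being able to verify packages mechanically. As a toy illustration of the EPUB container convention, where an EPUB is a ZIP archive whose `mimetype` entry must read `application/epub+zip`, here is a minimal sanity check in Python; a real validator such as EPUBCheck does far more than this:

```python
import io
import zipfile

def looks_like_epub(data):
    """Minimal check against the EPUB container rule: the package must
    be a ZIP whose 'mimetype' entry reads 'application/epub+zip'."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as z:
            return z.read("mimetype") == b"application/epub+zip"
    except (zipfile.BadZipFile, KeyError):
        return False

# Build a toy package in memory just to exercise the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("OEBPS/content.opf", "<package/>")
print(looks_like_epub(buf.getvalue()))   # True
print(looks_like_epub(b"not a zip"))     # False
```

Checks like this are the easy part; the harder preservation questions are about profiles, embedded media, and interactivity.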
We need to answer those questions for these future monographic forms. Now, there are some very interesting pieces of this. We have the EPUB 3 standard, and we have a lot of conversations, at least, about whether some of the preservation services can handle EPUB 3, or perhaps some subset or profile of EPUB 3, as a preservable object. Now, let me point at another really interesting phenomenon, and again, I would argue: think a few years out and think about scaling up here. We've seen two fascinating announcements this fall, one from the University of Michigan and one from MIT. Both come out of the university press at those institutions, saying: we would be happy to license our collection to other universities. Perfectly reasonable; it's not clear why we need complex middlemen in this operation. Why not do that at a very good price for these collections of content that are important to the academy and primarily targeted at the academy? But here's the strange thing about it. As I understand it, at least, the proposition at Michigan is: we'll give you all this material on Fulcrum. The proposition at MIT is: we'll give you all this material on PubPub. Do you see a scaling problem here? This starts to feel a little bit like the proliferation of journal sites. Will we then need a discovery layer? What's the lingua franca of transfer? In the journal article world it was basically the PDF; everybody would let you download a PDF. Is it going to be EPUB 3 files here? I don't know. But I really think that as we see these developments getting ready to take off, it is time for a sort of five-year architectural visioning of what the broad system for digital monographs looks like. There are huge numbers of exciting point projects to create this content and to deliver it. But it's time for us to take a deep breath and really think about what's going to scale and what's going to be preservable. Now, the last set of things I want to talk about.
These are programmatic. Some of you may have noticed, either in your packet or from the announcement on CNI-Announce a few days ago, that we released a summary of a meeting we held in September, bringing together IT and library leaders to discuss some of the current areas that are most promising and most urgent for collaboration. Now, let me give you some context on this initiative. CNI, for those of you who go back far enough, was really founded on the notion of being a place for information technologists and librarians to collaborate, to work together. At that time, it was viewed as a very bilateral arrangement: libraries and IT wanted to do some things, and each needed the expertise of the other to make them happen. The world has changed enormously since the early 1990s. Just think about how the locus of information technology expertise has changed on campus, how it's diffused, how expertise has moved into various units, including the academic schools and departments, into the library, into individual grants and funded research projects. Think about the way in which, at many campuses, there have been periods, particularly around the Y2K problem, the introduction of very large enterprise resource systems, and the ever-growing compliance burdens, when administrative computing, driven out of the president's office, out of the board of trustees, often out of problems of one sort or another, really dominated the agenda and focused central IT very heavily on those sorts of things and on the provision of core infrastructure. And in response to that, various other groups have picked up a lot of the more specialized research IT and academic IT. We've also seen the emergence of a whole assortment of groups that have something to do, and it varies tremendously from campus to campus, with learning management, instructional technology, pedagogy, and teaching with technology.
There's a whole assortment of these, and they report in many different directions; there's not a lot of consistency from campus to campus. We've also, in recent years, seen chief research officers and offices of research get a lot more involved, not just in policy, but in support infrastructure: in compliance around contracts and grants, research data management, those kinds of questions. So we have a really different landscape than we had, say, 25 years ago. We have also had a generational shift in leadership that is almost complete at this point, and a tremendous number of people now coming into those leadership jobs have only been in them for a couple of years. So what we've decided to do this program year is to convene a series of conversations, bringing together small groups of IT and library leaders, and we're trying in particular to be sure we get some of the folks who have been in their positions for a relatively short time, as opposed to just bringing in the usual, very prominent leaders who've been there for 10 or 15 years, to really take a fresh look at the situation. That report from September was a first survey of some of the issues, and I found it a fascinating discussion and very interesting reading. It's our intent over the rest of the program year, and perhaps extending further into 2019, to convene at least three more of these conversations: one focused very much on teaching, learning, and student success; one focused deeply on research; and one focused on cross-cutting issues, especially having to do with privacy, data governance, and related matters. I want to share two things that came out of those initial conversations that I think are very significant.
Remember how I talked about the nature of the collaboration in the 1990s being very much bilateral between the library and the IT organization? It really wasn't about the students or the faculty so much as it was about getting those two organizations to work together to build various things and services. Now, if you look at the vision of supporting research that comes out of the conversations of September 2018, and by the way, this was very much echoed in our executive roundtable this morning, it's really about how you take all of the interested groups, libraries, information technology, instructional technology, the office of research, and anybody else that needs to be part of it, and offer not services badged as library services, IT services, and research office services, but holistic, researcher-oriented services for researchers who don't need to understand the details of your siloing. That is a really different kind of model, and it calls for a very different kind of collaborative enterprise, perhaps quite different organizational models before we're through, and we'll be exploring that in a lot more detail. The other area I want to talk about, and this was one that was surprisingly fleshed out in the September discussions, is learning materials: how they're selected, how they're acquired, how they're financed. Now, most of us, I think, are familiar with the open educational resources movement, and certainly that's been quite successful in some settings. It's saved a ton of money out of students' pockets in a number of settings, and one of the really wonderful things about it is that it produces very quantifiable wins for students. But there's more going on here than that. Commercial textbooks, I think, are still going to have a place. I think we're going to live in a mixed environment.
My assumption usually is that unless something really drastic happens, you're going to end up living in a mixed environment, and that's usually, I think, the way it plays out. Now, that mixed environment is very interesting, because these aren't really textbooks anymore. They are interactive things that we call textbooks because we don't know what else to call them. You don't buy them and then maybe sell them back to the bookstore; you license them. They are things that collect data on you, remember that data, and can share that data in various directions in more or less personally identifiable forms. And by the way, if you're an individual student and your faculty member has said, well, you have to license this electronic textbook, go pay these people and get a key, guess what? You have about the same negotiating leverage on the license agreement as you do if you're acquiring a personal copy of Microsoft Word, don't like the click-through license, and call up Redmond to say you'd like to personally renegotiate the terms and conditions of your license. Not going to happen. The power imbalance there is just crazy. So we really need to look at how these electronic textbooks are licensed. There have been some very interesting experiments at some institutions, by the way, prompted by the fact that textbooks have become so expensive that a lot of students aren't buying them, which is bad, especially if the faculty member is actually using the textbook in class. And there is some evidence out of the OER work that's been done that in many cases faculty just assign a textbook because they feel there should be a textbook. They don't really use it; they just say, go buy this $200 textbook, because there should be a textbook you can study if you want to.
And when asked about that, as part of, you know, reconsidering what you're doing in light of flipped classrooms and OER and other things, faculty members said, you know, you're probably right. We could just get one copy of that, put it on reserve in the library, and we don't really have to have everybody get this. But in the cases where they're actually using the textbook, we've seen some courses now package things up so that there's a lab fee, everybody pays the lab fee, everybody gets an electronic version of the textbook, and they get it at a discounted price because it's negotiated, it's a package deal. So there's lots of issues here. There's issues about cost, but there are also issues about licensing terms. There are questions about privacy. There are questions about who gets access to the data. Does that data go to the textbook publisher? Does it go to the institution? How does it go to the institution? In what form is it kept? Who gets to look at it? Then there's the whole question of, are we really confident we have smooth delineations and boundaries now between electronic textbooks or OERs on one side and learning management systems and the things that live in learning management systems on the other? I don't think so. And certainly it is absolutely true that in the large-course commercial market, we don't have that kind of smooth delineation. I mean, a textbook adoption now comes with a cartridge with a teacher's guide, problem solution sets for the TAs, sample tests, a whole pile of material. We really need, I believe, to think about this holistically and talk very carefully about who's gonna play what roles in acquiring this material, paying for this material, what the interconnections are gonna be, what the rules of the road are gonna be, who the actors are gonna be. This is gonna require some major institutional commitment and restructuring.
And in some cases there are some pretty daunting built-in impediments like, oh, I don't know, bookstores, which report to a vice president for auxiliary enterprises. Of course, the bookstore historically hasn't had too much to do with the core mission of the university, things like affordability and student success that are major institutional priorities, and now all of a sudden it does, because it's got a long-term contract that blocks progress on these other issues. So I think that there's really an enormous opportunity there, and the striking thing to me is that this really does connect to fundamental priorities around student success, affordability, and related matters that are really important at a lot of our institutions. So let me sum up where we are here. I think that we've had some really good examples here of understanding what we can and can't accomplish with new technology, but also remembering that we need to be clear about what problems we're trying to solve, and we also need to think in at least five-year time horizons and always ask the question, will this scale up? It's fascinating to me to see how this question of what problems are we really trying to solve keeps coming up again and again at every level. We actually are seeing this at sort of a meta level now: well, what problems are higher education and post-secondary education trying to solve? What needs? What purposes? And that certainly is a little bit beyond our charge here, but the same kind of framing is showing up at all levels as I look at what's going on. I hope, if nothing else, you come away perhaps with an urge to make some roadmaps, to try and think about what a five-year future would look like and what the steps are along the way. How do we want this hunk of the world, whether it's the acquisition, selection, and management of learning materials or the world of electronic long-form discourse, how do we want those to look? And what are the paths to get there?
And when do the decisions that we're making now start us into places that aren't gonna be sustained? I hope that these comments this afternoon have given you some food for thought in that area, and I think you will see a tremendous amount more throughout the meeting. Thank you very much. We are close to time; I'll do two questions, if there are two and if they're not really long, complicated questions, or questions that I can't answer without really long, complicated, evasive answers. Sir. Charles Watkinson, University of Michigan Press and the PI on the Fulcrum project. Your comments about the scalability of these monograph platforms were very helpful. But I wonder if actually the way in which they're embedded in other communities and their use of standards and their use of common discovery mechanisms is doing enough for the short term. So making sure they're discoverable by the usual indexes, making sure they're deposited into the usual preservation networks, making sure they use DOIs, et cetera. It may very well be. But what I'd say is that if we're gonna make a bet that way, that there are gonna be a fair number of these, then it places a certain emphasis on the definition, adoption, and adherence to appropriate standards in the right places to make it all work. Whereas if you just have one or two platforms, it eliminates a lot of the standards issues. So I'm not saying that what you're describing is an impossible future, but this is exactly the kind of thing I'm saying we need to think five years out about. And if this is the road we wanna go down, and I believe there are some arguments that this is not a terrible road to go down, then there are a whole series of things we should be examining to make sure we've got the right pieces of infrastructure and standards and practice in place, because those things have a certain development time associated with them.
Very much like swapping out a crypto system that's potentially vulnerable in the future has a really long time horizon on it. The other thing I'd say is that I think we need to think very carefully about what that says about the affordances of platforms as opposed to eBooks. Because it implies, to me at least, that you're embedding a lot of the annotation and other affordances in the environment around the eBook rather than in the eBook itself, although I'd like to see that very carefully analyzed and tested. But I think that those are the sorts of things we need to really have a very clear picture about in our heads. One more? People are ready for a break. Okay, well, I'm so delighted you're all here and I hope you have a wonderful conference. Thank you.