Thank you. The agenda for today's meeting is going to have three parts. I'm going to do a general update on ARL happenings and assessment activities. The second part will be some of the latest findings from our evaluation of the last Library Assessment Conference and some of the plans we have for the upcoming one, and Steve Hiller, the co-chair of the Library Assessment Conference, will do that part. Steve is sitting over here, for those who don't know him. And then there is a third part to today's event that was sparked by interest expressed on the ARL-ASSESS discussion list, with questions about the IPEDS Academic Libraries component, the US national survey that the National Center for Education Statistics is implementing, and a number of definitional issues that you are all struggling with; we have a way to communicate some of this information to them. That part of the discussion will be facilitated by David Larsen from the University of Chicago, who will be coming from another meeting that he's attending right now. Thank you.

I don't know how many of you are aware that this past year we lost our colleague, Julia Blixrud. It has been a great honor to work with Julia. She was going to be at the Library Assessment Conference in Seattle, but she couldn't be; she had been at every other Library Assessment Conference. I think we're still going through understanding what that means for us. I know the ARL Board will be discussing what is appropriate to do for Julia, and Elliott Shore, our executive director, and Sue Baughman, our deputy executive director, will be forwarding some ideas to the board. Just be aware that we have ideas about a potential scholarship fund and things like that, so you'll be hearing more from us on that. And also just to say how blessed, really, we all are for having had Julia in our lives. She was always smiling.

I always try to open this meeting by giving a sense of who we are, because we always have some people who are new. Oftentimes they want to ask, okay, I've been to an ACRL meeting; is this the same thing? So I always try to have a sentence or two outlining the difference between ARL and ACRL; they are two different organizations. ARL is an institutionally based organization, and our member libraries are 126 research libraries in the US and Canada. Our headquarters are in Washington, D.C., where we overlook wonderful Dupont Circle. If you are in that area, I hope you'll come visit the offices at 21 Dupont Circle. And the membership covers the US and Canada. I'm really glad we have with us today more than one ARL director: Bob Fox is the chair of the Statistics and Assessment Committee, and he's always been here at our meeting. But we also have with us today Martha Whitehead from Queen's University. Am I missing any other ARL directors? I haven't seen them yet, thank you. And it's actually good to have Martha here because I am talking a little bit about where we are in the strategic thinking and design process, so you may want to say a sentence or two; I don't want to put you on the spot on that. It'll come, yeah, I have a slide with the framework there.

Typically, traditionally, historically, ARL has focused on strengthening research library performance and influencing the scholarly information environment. The portfolio Julia was leading had to do with the scholarly information environment, and it's a very vibrant portfolio that, as all of you know, is changing as scholarly publishing changes.
So I can walk you through this a little bit more. Right in the middle of this picture, it talks about the core activities that Statistics and Assessment is engaged in: describing member characteristics, roles, and contributions. Clearly, with a lot of the data we are collecting, that's what we are trying to do. And why are we trying to do that? So that we can articulate and represent research library interests, so that we can advocate (that's what it says over there, advocacy), and so that we can monitor our environment. This aims at enabling actions that are organized through different membership forums, informing and mobilizing our member libraries, and also enabling institutional responses. So as, for example, we are engaged in... oh, wonderful. I heard a noise, but that's going to block the sun behind me. Thank you. Very good. So you can actually see that a little better now.

So again, a summary of describing ARL roles. We are in transition. One of the major elements of our transition in the office was our big renovation; over last Christmas and New Year's the office was closed for about four weeks. Here you see the before and the after. As part of this picture that describes our roles, I was going to mention an example, like the Facilities Inventory, which we collected as a way of describing what's happening in member libraries. Part of that work has been featured at the different membership forums we've had, even at the Library Assessment Forum. Again, we are trying to find best practices among our member libraries, and that's driving the information exchange that we're engaged in. Beyond our member libraries, though, we are part of the larger university system. We always try to integrate with what the universities are doing, and part of our role is to monitor all that environment. So the facilities work, for example, is an example of how this fits in the circle, and you can see that with other aspects of the ARL agenda. Yeah, we also have a mic here; you cannot see it there. I think I'm trying to do too many things, but it's a first: trying to record this for posterity, trying to project, trying to block the sun, which sounds successful.

So we are in transition, and this was a picture of our renovation. If you haven't seen it on our website, there are pages about our strategic thinking and design process. It was a series of regional meetings that engaged 300-plus people. There is a report that the membership has vetted, and there is an online product that will be publicly available in the coming days. That's the cover of the report, which was completed in August. It was discussed in October at the membership meeting, and the committees also discussed how they see transitioning into this new environment. Part of this report is this framework that is the basis, the foundation, of what we are being called to build upon. It describes a number of key roles for ARL, the context for research libraries, and essential capacities. ARL has always had a very strong advocacy and policy role, and that is maintained as an essential capacity. Assessment has been a key strength of ARL, and that again is maintained as an essential capacity. Communications and marketing are explicitly stated there; as an association we have always had a very strong communications element, but it's explicit now as an essential capacity. And there are aspects of being an incubator: SPARC and CNI were examples of efforts that served as issue incubators.
Of course, our core membership is an essential capacity, and so is partnership. We're in an environment where all our member libraries are being called to collaborate more and more with different institutions, and we are being called to do the same. So beyond our essential capacities, what is driving us into the future, I think, are those five areas that have been termed a system of action. These ARL initiatives are grouped into two areas: initiatives that extend beyond the library context, which are the first three areas, and initiatives that are within our community, closer to the library world, which are the last two. The areas that extend beyond our community are expressed as the collective collection, the scholarly dissemination engine, and libraries that learn, which has a lot to do with business intelligence and analytics; again, assessment has a strong element there. In terms of initiatives that are important within our community, there is the concept of the ARL Academy. Am I in the right room? Should I be in the other room? In the other room right now there are meetings happening related to our diversity and recruitment work, where my colleague Mark Puente is bringing a number of young scholars together in a three-day institute. And the last concept is the Innovation Lab, which, I think, like every innovation, is yet to be defined. So we look forward to seeing what shape that will take. Marcelle, since you are here and you are a member of the transition team, would you like to say a few words? Thank you. It's really great to have you here.

So let's take a little bit of stock of some of the programs and how they have developed. I thought it's a good time to see where we are now, and that will push us into the future. ARL has historically supported the collection of statistics and the salary survey; the statistics go back even before ARL was established. But in the last 10 to 15 years we experimented with the development of a lot of new measures, and one of them, LibQUAL+, gave us the capability of supporting technology infrastructure that made it possible to offer these tools to the library community even beyond the ARL membership. While this was happening, we continued to offer consulting services for developing performance assessment efforts, with Jim Self and Steve Hiller traveling to a number of ARL libraries and articulating how assessment activities in those libraries could be strengthened. As part of that effort, the gathering of this community in the form of the Library Assessment Conference came about, starting in 2006 and continuing strong these days. We also experimented with a performance measurement framework, the Balanced Scorecard, which continues to be of interest to many ARL libraries and continues to be a way of planning for many other libraries. Can I have a show of hands of who's using the Balanced Scorecard framework in this room? And I will name a few of you: Florida State, MIT, Ohio University, Washington University in St. Louis. Thank you, thank you. And we know of at least about 20 ARL libraries that have engaged with this framework, and the University of Washington had a version of this framework at some point. We also did some work with scenario planning. I was mentioning it to a couple of colleagues this morning because they were saying how their organizations are in transition and they need to have conversations about the future of the research library and what that looks like.
And they were not aware of this report, which took about a year to formulate, and which presents four scenarios about the future of scholarly work and how it gets disseminated. The final PDF publication has a number of exercises that you can use in your own institutions to have discussions about that future and what it means for the library. ARL itself did a collection of profiles, five-page narrative descriptions, as a way of moving beyond numbers into providing more context about the changes that are happening in our libraries. I know a group of CIC assessment librarians has expressed interest in updating that work, and I see one of them back there, Eboni, who actually brought that idea to the phone calls the CIC assessment librarians have.

Now, what we have spent a lot of our time and energy on the last couple of years has been the assessment of special collections and facilities. Some of that you've seen through the Facilities Inventory, and we have a form, which we are defining a little bit more, for collecting special collections stories that we'll make available in the coming weeks. All in all, our phone calls have been pretty busy with the various agenda items. Now, a couple of other efforts have happened during this time frame but are smaller in scope. The ClimateQUAL effort is an internal organizational climate and diversity survey, and a number of you have actually engaged with that protocol. And the MINES for Libraries protocol is a way of capturing the perceived value of the downloads that are happening from the various electronic resources and services. We do have a couple of libraries right now in the room that are using the MINES for Libraries protocol, and I see Michael Maciel from Texas A&M nodding there. Anyone else that I'm missing right now? A couple of other institutions are actively collecting these data now.

We did a three-year IMLS grant with the University of Tennessee and the University of Illinois at Urbana-Champaign to move beyond just tools, to identify multiple approaches and multiple methods. I think at some point people were feeling that ARL was prescribing tools like ClimateQUAL or MINES for Libraries, and we wanted to move beyond those confines. So we experimented, through the Lib-Value work, with developing methods in about six to seven areas focused on teaching and learning, special collections, and information commons. There has been a series of webcasts that has captured that work, and we are actually launching a toolkit that will expand this gathering of information across the different approaches and methods in a more systematic way. I see it coming out before May.

So what's next? You've seen in the framework the concept of libraries that learn; that's a very rich concept to be defined further, and clearly some of the activities we are currently engaged in will be there. We have an exciting new opportunity with an upcoming visiting program officer who forwarded a proposal to the Statistics and Assessment Committee in October to do a longitudinal analysis of the salary survey. He comes from Brigham Young University: Quinn Galbraith. And I want to thank Jeff Belliston, who was here and on the phone as we defined this work; it's about ready to start in the next month or so.
We know that a lot of the data efforts I've mentioned result in service improvements in libraries, and we try to make space for that to come forward to everybody's attention. We know data management and data visualization are important. We had a panel at the last Library Assessment Conference where Sarah Murphy, Rachel Lewellen, and Jeremy Buhler presented on that topic. As a follow-up, we are offering three webcasts this spring, and they are listed on the event flyer that's out there on the table. There is a fourth, yes, thank you; someone who has a copy of the event flyer says there are four entries for that series of webcasts. The three webcasts are designed to be half-hour webcasts where each one of the people I mentioned presents on how they use Tableau for data visualization and data management purposes, and then we have a one-hour Q&A session with all three of them to field your questions. So it's a series of four webcasts in that sense.

And last but not least, we have gotten a new grant on institutional repositories and digital collections and the ways to assess those resources, and I do want to say a couple of things more about that grant. We are going to start with a needs assessment: what is the primary purpose served by your institutional repository? We are defining institutional repositories and digital collections in a broad sense, not specific to software platforms. That presents some challenges, but at the same time it allows institutions that have multiple platforms to take a broader view of the needs assessment and the assessment that needs to take place for these resources. I have a quote from Bob Fox; it was in the press release for that grant. Do you want to read it? "It is critically important for institutions to gain a strong understanding of the value of institutional repositories and digitized materials, as these resources represent a large and growing part of the utilization and public awareness of our collections." In some of the initial exploratory work we are doing, we are identifying needs such as: how can we use the development of IRs as a fundraising tool, for example? How can we market the different collections we have more effectively to the communities that care about them, because those communities tend to have disciplinary boundaries? I do want to remind you that back when we did the Celebrating Research volume, which featured a special collection from each library, MIT presented their institutional repository as their special collection. There is a very interesting convergence there. So this is to say: we are transforming.

Steve, your turn. You have to wear this headpiece; otherwise the voice is not captured. I know it's not pretty. [crosstalk while the microphone is handed over]

Good afternoon. Can you hear me? I can't hear myself. I'm Steve Hiller. I'm at the University of Washington; I've been there a really long time. I hope you all have your favorite viewing spots for the Super Bowl game on Sunday. I will, of course, fly back tomorrow because I want to be part of the riot that occurs after we beat the Patriots on Sunday. All right. So, the Library Assessment Conference, and I'll look back. Mind if I sit? Is that okay? Thank you.
Basically, the goal of the Library Assessment Conference is to create and maintain a community of assessment practitioners, one where we can learn from each other and inform each other, both in structured and informal ways. And that's basically it. It grew out of, as Martha noted, ARL efforts in the earlier part of this millennium dealing with LibQUAL+, as well as the Effective, Sustainable, Practical Assessment program that Jim Self and I ran. Out of these, we saw a need to keep this community engaged and to keep the community growing, not just for the sake of the community, but for the sake of the library and the institution. And, of course, I think we've all seen in our institutions a real increase in interest in assessment, particularly dealing with learning outcomes assessment, but also in other areas: evaluating the research enterprise, making sure that we're doing things as efficiently as possible, and so on.

So, the conference began in 2006. In 2008, we started our Library Assessment Career Achievement Awards, and this slide is from 2014, when the awards went to Jim Self, Joan Stein, Brinley Franklin, and Fred Heath. And these were the presenters. I tried to wear the same clothing that I did then, just to get into the mood, and so I've got the shirt, but it is winter now and we're in Chicago, so I've got a sweater. This is a slide we showed at the Assessment Conference. How many of you attended the Assessment Conference? Wow. Well, I don't know whether to give the discount for the next conference to the people who have already attended or the people who haven't. Well, maybe you can just give them all a discount, Martha. Yeah, okay.

It's interesting that while participation in the conference has been relatively stable over the past few conferences, the majority of registrants actually do not come from ARL. ARL has held steady since the first conference at between 40% and 47%, something like that. So there's a lot happening in other areas of libraries: public libraries, other organizations, vendors, and, of course, four-year college libraries and community college libraries. So, of course, we want to recognize that and get the full range of assessment that's occurring, particularly in higher education. You can see that where people are coming from is pretty stable in terms of US, Canadian, and foreign. And then wherever the conference is held, that's where we have the greatest number of attendees, which kind of makes sense. You don't need Tableau to do the data visualization on that one; we know it.

The conference structure changed dramatically in 2014. This was due to a number of comments we got in the evaluation in 2012. There was a heavy reliance up until 2014 on more formal academic papers; as you can see there, the totals were around 70 per conference, and then dropped to 38 in 2014. We reduced the number of posters quite a bit because people said there were too many posters for them to really be able to engage the poster presenters and see all the posters in one poster session. And then we added lightning talks; we got 58 of those, and 56 were able to show. And then panels, which were new. So we added two new categories. We've always had around six or seven workshops; these are pre- and post-conference workshops.
Just as a point, you can see the number of registrants with "assessment" in their title was nearly 100 in 2014, out of about 600 people attending. So we get the core group, but we're also pleased to get other people who are involved in assessment, who are interested in assessment, who can champion assessment at their institutions. The themed sessions were probably the biggest difference; as Martha mentioned, we had three sessions dealing with data and data visualization. So that was quite a change, and as you heard, it obviously played some role in our using those participants in the webcast series.

All right, conference rating. I love these charts. This one actually just goes from three to five. But you can see that for the last three conferences, the overall conference rating has been the same: about 4.4. This is from anywhere from 250 to about 280 respondents. That's a pretty good score. I mean, it's okay; we keep making changes, and we'd love to be higher. But I think the stability of that, and of course the increased attendance, indicates the value of the conference to those who attend and to the broader community.

Quality of presentations. This is kind of interesting. You can see 2012 in blue and 2014 in red, and that the plenaries got a lower response. It was almost a bimodal kind of response: people either loved or hated a particular presenter. I wouldn't say hate, hate is too strong an emotion, but there was a lot of, you know, people who really liked one of the presenters and others who didn't like that same presenter and put that in the comments. So it goes to show you, you can't please all of the people all the time. It was a Kentuckian who said that, supposedly. I don't know. Anything else on the quality? Posters went up, which is good; we thought we'd have higher quality posters by accepting fewer. Panels were low, but again, it was a split: there were some panels that got a lot of praise, a lot of positives, and others that didn't. And this was the first time we did panels, most of which were submitted, but some of which we put together. And finally, the lightning talks got a pretty high rating as well, so we're pleased with that. As for the range and usefulness of the presentations, you can see that the lightning talks and the posters, because they actually represented what people were doing at their institutions, practical stuff, again got high ratings in both of those areas.

And then conference logistics. You can see there were some substantial changes between 2012 and 2014, and that reflects the type of venues that we had. In 2012, there was one primary hotel, but a couple of others were used as well. Lodging got rated much higher this time around, at the university; there was a split between hotel space, which wasn't sufficient, and new dorms, en suite and all of that, which can be a quieter place for some people. Meeting rooms: we've heard a lot about meeting rooms at the conference, a lot on Twitter, and so there it goes. Meeting rooms were rated really low, and that's partly because we didn't anticipate attendance well. We had to guess which sessions would draw the most people, and we had a couple of large rooms and a couple of smaller rooms. It turns out that the lightning talks were the most popular, and so all these people were crammed into what seemed to be an airless room on a very warm day, spilling out into the hallway. So the message has been acknowledged; we've got that one. The workshop rooms got really high ratings.
We were able to use some of the new active learning classrooms in the undergraduate library, and I think people really enjoyed that. And of course, we spent more time on the receptions than we actually did on the program, so we're pleased to see that the receptions, particularly the poster reception and the conference reception, rated higher. In fact, holding those outdoors, at least the conference reception, made quite a change from Charlottesville, where that evening, a little further up the coast, Sandy was hitting the New York area. It was just drizzling and cold in Charlottesville, and it was indoors.

All right, so how could we improve the conference? These were comments that were made by people. We need to have meeting rooms that can handle the number of people that are appropriate for the topic; we have to do a better job of planning where we expect to have the maximum numbers, or just have larger meeting rooms so that it doesn't matter. Reduce the number of presentations: we reduced the number of presentations quite a bit, certainly on the paper side, but people still felt that for three days there was a hell of a lot going on, and they weren't able to catch everything. Be more selective on panels: yeah, we need to do that. I think there's value in panels, but we have to put a little more care into those, and in some cases we may take a more active role in forming panels, particularly ones that are more summative in nature. Lodging should be closer to the meeting area, particularly one hotel. Don't grow the conference any larger: people felt it was pushing the upper limits even then. Folks liked keeping the late afternoon hours open; we tried to end at three each day so people could take advantage of getting out and enjoying Seattle in the summer. Our foreign registrants really liked the fact that, totally jet lagged or whatever, they could just go back and take a nap in the afternoon. That was good.

Moving to 2016, we're going to be at the Crystal Gateway hotel in Arlington, Virginia. The conference is scheduled October 30th to November 3rd; the 30th and the 3rd are sort of workshop days, and the conference itself will be the 31st through the 2nd. It was not easy to find a place in Washington, D.C. during that period of time. Everybody had one week open, and that was the week of the election. We kind of heard from people that they didn't want to spend the election away from home, that they wanted to be able to vote and so on. Of course, Halloween is also an at-home thing for some people; we'll do something about that. We do plan to have a Halloween party either on the 31st or the 1st. You can dress up as your favorite politician; the election is the following week. Come as your candidate: statewide candidate, presidential candidate, whatever. The format will be similar to Seattle. The hotel is close to National Airport, on top of two Metro lines to Washington, D.C., and we have 350 rooms, so there should be plenty of space for people. Just to give you a sense of the meeting rooms, you can see that the room for the plenary sessions will hold about 600, but we can split that into two, so that's 300 each. And then we have small rooms, but the small rooms can actually be combined into twice that size. So we have a lot more flexibility with the space than we did last time. And we've appointed a steering committee, and there you can see the names. And let me just, you know, we need to get on. Just the preliminary timeline: the call for proposals, I put February 10th.
It'll be somewhere between late January and mid-February. Proposals will be due around the beginning of April, and submitters will be notified towards the end of May. We'll open early registration for the presenters and general registration on the 15th, and registration closes on September 15th if there is still space; we've run out of space every year. And just to let you know, it's not Arlington, Illinois, and it's not Arlington, Washington State: it's Arlington across from Washington, D.C. So we will be in the D.C. area, and people can certainly take advantage of everything that offers; again, two Metro lines right there, very easy to get there and back. Any questions? Okay, well, hope to see you there.

Are there questions, or any other issues you may want to raise before we go on? [inaudible audience discussion]

Let's go to the topics you wanted to bring up before we ask David Larsen to help us. Elizabeth will let me see those slides, maybe. Thank you, Amy. David Larsen and Elizabeth Edwards, please come forward. [crosstalk while the microphones and chairs are set up]

All right. Can you hear me? Do I need to talk into this thing? We can hear you fine. Perfect. Wonderful. All right. So I'm David Larsen. I'm head of access services and assessment at the University of Chicago Library, and we're here to talk about the new Academic Libraries component that's part of IPEDS. Starting this year, I'm just going to hold this near my mouth, I think. Starting this year, the NCES Academic Library Survey that many of us had been filling out has been integrated into the, I can never remember what IPEDS stands for, that's why I have it written out here, the Integrated Postsecondary Education Data System (IPEDS), as a new component, the Academic Libraries component. The big differences are that it's going to be collected annually now, each spring, instead of every other year. It's required: any university that grants degrees has to fill it out. And the data collection is probably coordinated by your academic institution rather than the library, so we may be having to work with new people, and it's due in April. None of the questions are optional, so they're telling us that if you don't know the number, you should guess. So that's interesting. There are two sections. The first section has to be filled out by everyone; the second one is for any library that has expenditures of more than $100,000. The first section is all about holdings and circulation, both physical and digital. Can you see? Now can you see? No, still not. What about now? Yay!
Sorry. So, the first section, I don't know if you can still read this even though I'm not blocking it, is asking you for the number of physical books, the number of databases (well, databases will be digital), the number of physical media, library circulation for physical items, and then your digital and electronic books, your digital and electronic databases, your digital media, and your digital library circulation. The second section, which Elizabeth is showing you, is all about expenditures and interlibrary loan transactions. We're going to focus on section one here, which is where most of the questions have been; the section two stuff adheres a little more closely to what we're doing for ARL and hasn't, to my knowledge, generated as much controversy.

So, physical books: we're supposed to report all classified catalog volumes, and include print photographs, musical scores, government documents, and serials, but exclude microfilms (it says microfilms there, not microforms), maps, and non-print items. Next, physical media: we're supposed to count all classified catalog audio or visual materials, including sound recordings, motion pictures, video recordings, and graphic materials, but again we're supposed to exclude, and this time it does say microforms. So one question people have been asking is: are microfilms and microforms counted at all? It sounds like not. Map librarians are asking whether their maps are counted at all. Are they put into graphic materials? I don't know; it's a stretch, so maybe they're not counted.

When it comes to physical circulation, we're supposed to report circulation of physical items from our general and reserve collections, include books and media, include both initial checkouts and renewals, and exclude device checkouts unless those devices have books on them and you're using them to circulate books, essentially. And then: include only interlibrary loan transactions where items are borrowed for users. That was the thing that got me here today, because I had no idea what that statement meant. I manage interlibrary loan services, and when I hear "borrowed for our users," I think we're talking about borrowing transactions where we're taking things from other libraries for our users. But the whole point of this seems to be to talk about the circulation of your general and reserve collection, so that just seemed really strange. If we go to the next one, yes, there we are: I think we really have to be thinking about lending transactions here, and I think the intent is to include only interlibrary loan transactions where items are borrowed by other libraries for their users, essentially. So if that's indeed the case, then it's really lending transactions we're talking about. And this seems to be the case when we look at a FAQ that they provide. If we go to the next one, there's a FAQ that asks explicitly: if interlibrary loan transactions are included under physical circulation, is this a duplication of data reported under interlibrary loan services? And the answer is: there may be some duplication, but the intents of the two data elements are different. Total physical circulation is a measure of how much the collection is lent out to users, while total interlibrary loans and documents provided to other libraries is a measure of how much is lent out to other libraries; the latter can be considered a subset of the former.
I don't know if that helps, but it seems to me that they're essentially agreeing that it's lending transactions we're supposed to count. But I'm still confused, because when we go to the definition for interlibrary loan lending in section two, it's telling us to include both returnable and non-returnable items. And it just seems really weird to me to have all of our scans and copies lumped in with our lending transactions as if they're our physical circulation. We did clarify this with the IPEDS help desk, and essentially that's what they told us to do. So I guess that is what we're going to do at Chicago; it just seems somewhat strange to me. So if we go to the next slide: it seems to me that what they want us to report is our ARL value for initial circulation, plus our reserve circulation, plus the renewals that we have, plus our interlibrary loan lending of originals, plus our interlibrary loan lending of copies. If we go to the next one: for Chicago, that means what we start out with for ARL, 234,328 circulations, more than doubles, to 594,321, when we report this. So that's the number we're going to be reporting at Chicago, and it is mostly renewal transactions. As I think we all know, we all have different renewal policies, and if we say you have to renew every week, we're going to suddenly increase our circulation an awful lot. I don't know how meaningful this number is as a point of comparison, but this may be the number that our provosts are more likely to see, because it's a university-wide survey. So with that, I'm going to turn it over to Elizabeth to talk about the digital components.

Okay, now I'm going to see if I can hold this and turn pages and all. Hooray, David's going to turn pages, even better. I was asked to talk about the digital and electronic books and circulation issues, which are possibly even more difficult than the physical volumes. Digital and electronic resources are not my area of expertise, so much of what I'm going to share with you comes from our electronic resources librarian, who gave us really wonderful, detailed notes about exactly how difficult it is to answer these questions for our collections. So for digital and electronic books, the definition is to include both licensed and unlicensed e-books, to exclude serials, to include government documents, to include e-books that are held locally and accessed remotely, but not to include e-books that are available as part of a database. That makes the purchase model the determinant of whether your e-books count as a database or as e-books, and after discussion with our electronic resources librarian, one of the challenges she brought out in this definition is that it excludes those instances where we buy all of the e-books from a publisher. So the digital equivalent of purchasing on approval is now not counted as e-books under this definition, and there are seriously large numbers of titles that we're not going to be able to count as titles because we purchased them in a different way. You can go to the next slide. Then counting your e-books is also contingent upon how the e-books are licensed. Depending on the number of concurrent users you can have for a title, a single e-book could count as one, it could count as 10, it could count as 100; or if you have unlimited users, then you count it as one. So this starts to bring to light some of the challenges that we're running into in trying to make sense of how to talk about these e-books. Go ahead, go to the next slide.
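[Editor's note: for concreteness, here is a minimal sketch of the physical circulation tally David walked through. The component figures are hypothetical placeholders, not Chicago's actual breakdown; only the 234,328 and 594,321 totals above come from the talk.]

```python
# Minimal sketch of the IPEDS "total physical circulation" tally described above:
# ARL-style initial circulation plus reserves, renewals, and interlibrary loan
# lending (originals and copies), per the IPEDS help desk's guidance.

def ipeds_physical_circulation(initial_checkouts: int,
                               reserve_checkouts: int,
                               renewals: int,
                               ill_originals_lent: int,
                               ill_copies_lent: int) -> int:
    """Sum the five components that feed the IPEDS physical circulation figure."""
    return (initial_checkouts + reserve_checkouts + renewals
            + ill_originals_lent + ill_copies_lent)

# Hypothetical library where renewals dominate, as David notes they do at Chicago.
total = ipeds_physical_circulation(
    initial_checkouts=200_000,   # ARL initial circulation
    reserve_checkouts=30_000,    # course reserve checkouts
    renewals=250_000,            # renewals; weekly renewal policies inflate this
    ill_originals_lent=15_000,   # returnables lent to other libraries
    ill_copies_lent=20_000,      # scans and copies supplied to other libraries
)
print(total)  # 515000
```

As the sketch makes visible, two libraries with identical collections and users can report very different totals just by having different renewal policies, which is exactly David's caution about using this number for comparison.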
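[Editor's note: the e-book counting rule Elizabeth describes can be sketched the same way. The titles and license caps below are hypothetical, purely for illustration.]

```python
# Minimal sketch of the IPEDS e-book counting rule described above: a title
# counts once per allowed concurrent user, or once if users are unlimited.

def ipeds_ebook_units(concurrent_user_cap):
    """Return the IPEDS unit count for one title.

    concurrent_user_cap: an int (licensed simultaneous users),
    or None for an unlimited-user license (counted as 1)."""
    return 1 if concurrent_user_cap is None else concurrent_user_cap

catalog = [
    ("Title A", 1),     # single-user license -> counts as 1
    ("Title B", 10),    # 10 simultaneous users -> counts as 10
    ("Title C", 100),   # 100 simultaneous users -> counts as 100
    ("Title D", None),  # unlimited users -> counts as 1
]
print(sum(ipeds_ebook_units(cap) for _, cap in catalog))  # 112
```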
Digital and electronic databases: that's pretty straightforward, with that one odd inclusion of e-books purchased in that particular way. So report the total number of licensed digital and electronic databases in your collection. Go to digital media. For digital media, report the total units of downloadable media materials featuring video, graphics, or sound, including streaming media and graphic materials the library has selected as part of its collection. Again, some items that we might consider digital media are counted in the databases number instead.

And then the final, very, very tricky one is digital and electronic circulation. The magnitude of this problem was actually first brought to me by Jen Yu, who mentioned it on either the ARL-ASSESS listserv or the CIC assessment listserv, saying: how in the world are we going to do this? This is something that we struggle with in general but are struggling to articulate for this particular question specifically. So, the definition: report the total number of times digital and electronic units are checked out from the general and reserve collections. As I said, this is not my area of expertise, but one of the things that has been made abundantly clear to me as I've become part of the assessment community is that we don't really have an analog for a digital checkout. So we're starting off with a strange premise: trying to define something that we as a community accept we can't really define. Include both initial transactions and renewals. Include transactions for units of digital and electronic books and media. Do not count transactions of digital and electronic databases. Do not count transactions of VHS tapes, CDs, or DVDs, as the transactions of these materials are reported under physical circulation.

So how do we even start to answer this question about digital and electronic circulation? This definition from IPEDS does not mention either of the COUNTER standards that could be used, and that at Chicago we're likely to draw on in trying to answer this question. Specifically, even if it mentioned COUNTER, it doesn't go into the difference between the different COUNTER standards that are used by different publishers and platforms. So in trying to come up with a preliminary answer for Chicago, our electronic resources librarian contacted a couple of platforms and publishers to ask how they count use. The two standards, as many of you are aware: BR1 counts use by title; BR2 counts use by section. And "section" is not standardized. ebrary, for example, uses BR2, and ebrary counts a page viewed for 10 seconds as a use. But if you download the entire book, that is also one use. So the equivalent of taking something off a shelf, or checking it out for a year, is a single use. Springer also uses BR2, and they count use by chapter, or chapter view and download; if you download the entire book, that is also counted by chapter. So it's just sort of impossible to bring those numbers together in a way that's reasonable. So I think the question we wanted to bring to this group is: how in the world are you answering this question, if you have started to think about answering it? We looked at our books and our platforms and determined that about 25% of our platforms and publishers use BR1 and the remainder use BR2. We're likely just going to have to footnote this, saying these publishers use this standard, those publishers use that standard, and from those two numbers come our data.
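[Editor's note: one way to read that footnoting plan, as a minimal sketch. The platforms, the BR1/BR2 split, and the usage numbers below are hypothetical, not Chicago's data.]

```python
# Minimal sketch of the footnoted aggregation Elizabeth describes: usage
# reported under COUNTER BR1 (uses per title) and BR2 (uses per section)
# cannot be meaningfully summed into one figure, so tally them separately
# and footnote which platforms report under which standard.

platform_reports = [
    # (platform, counter_report, reported_uses) -- all values illustrative
    ("Platform X", "BR1", 12_000),      # counts by title
    ("ebrary-like", "BR2", 90_000),     # counts by page view / full download
    ("Springer-like", "BR2", 45_000),   # counts by chapter
]

totals = {"BR1": 0, "BR2": 0}
footnote = {"BR1": [], "BR2": []}
for platform, report, uses in platform_reports:
    totals[report] += uses
    footnote[report].append(platform)

for report in ("BR1", "BR2"):
    print(f"{report}: {totals[report]:,} uses "
          f"(platforms: {', '.join(footnote[report])})")
```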
Other issues that were raised to us: why aren't electronic journals included? They're included in expenditures, which is significant, but they're not included in collections anywhere. Additionally, the advice on how to handle government documents seems to be contradictory: in one place you're told to include them, and in another place you're told to exclude them. We've included a number of different places to go for help and advice, including directly from IPEDS, as well as Bob Dugan's very useful LibGuide that includes a lot of crosswalks and other documents showing the transition from the Academic Library Survey into IPEDS, and also how the IPEDS definitions match up (or don't) with other surveys, including the ARL and ACRL surveys. We'll pause and let people take pictures of the slide. So at this point, I think we just want to open it up to questions. Yeah, why don't we stay here and open it up to questions and discussion. As a number of you have started working on these questions for your institutions, how are you approaching the digital and electronic questions, and also the circulation questions?