The EDUCAUSE Core Data Service was created in 2001, and the idea was to help institutions figure out how they compare to other institutions. So essentially you have your group of peers, and then you also have a group of institutions that you aspire to be like. The idea was: we're going to ask you for a set of data, the core data, about your institution (how many staff you have, what your budget is, that kind of thing), we're going to put all that together, and then you'll be able to compare your information to your peers' information. The EDUCAUSE Core Data Service has four basic components. First is an annual survey that the worldwide membership is asked to complete. Second is a data access service: all those who participate in the annual survey can compare their responses to those of other participants. Third is a set of aggregate summary reports that EDUCAUSE produces, which don't disclose the identities of any participants but provide broad overviews of the IT landscape. And fourth, tying it all together, is an appropriate use policy that clearly prescribes how the data in identified form may be used on other campuses. Here we are in 2011, and it's time to look at it again and ask: what's changed in the industry? The answer is a lot, as you can imagine. So it was time for us to look at the questions and make sure they were actually representative of what IT on campus is doing, and what we found is that some of them weren't. The first thing we needed to do was find someone to help us lead this initiative for the members. Dana Uptigrove, former CIO at Yale and UT Austin, was a great choice because he has had a lot of experience in many university environments. The next thing we did was create a member working group, with large and small institutions, public and private, domestic and abroad, to help really guide the process.
There are 15 of them who've worked tirelessly since last March, helping us every step of the way. The data collection process we put in place included phone and email interviews, in-person interviews, focus groups, a web-based survey we ran last summer, an alpha test of the first draft of the redesigned survey, and a blog that kept the community informed of developments. We've been very pleased at the number of members who've engaged with us. We gathered a great deal of information that showed us our thoughts about the product were very much on target. Let me share with you some of the feedback we got. In the case of the survey, it was a wonderfully bimodal response. Some people said the survey is too long and difficult to complete: we need something streamlined and simplified. Others said they were disappointed that the survey didn't go broader and deeper: couldn't you please give us more detail? So our response was to substantially enhance and enlarge the survey content, but to make the survey modular, so it would be more straightforward for a CIO's office to delegate big parts of the survey to the cognizant director. And many of the modules are optional. One thing we heard from the members loud and clear is that they don't want to learn a whole new tool for munging the data; they want to use the tools they already know, which in a lot of cases means Excel or a statistical package. So our response is to create essentially a batch downloading capability. You define which modules you're interested in and which institutions are in the peer group you care about, press one button, and download all the data. Then you can analyze, sort, select, graph, and report any way you want, and you can use your familiar tool set. Another piece of member feedback we got was that you want help understanding how to use the data. You've got all this data, it's a gold mine, but what do you do with it?
How do you begin to make sense of it? What we heard was that our annual summary report, while it's interesting, doesn't give you enough information to really dig into the details. So we're going to have a whole new focus this year on how to use the data. We're going to be looking at case studies of other organizations using the data. We've also relaxed the appropriate use policy so that you're able to use the data to make your points in your presentations. As long as you don't identify a particular school's data, as long as we're looking at aggregate data, you can use this as part of the suite of tools you use to make the case on your campus. We envision that in many ways CDS is not just a database and a toolkit but a community of practitioners, and EDUCAUSE is in a position of facilitating and brokering a set of IT leaders, planners, and managers who are all working together, in some cases in fairly adverse economic times, to do the best they can for their institutions.
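To make the batch-download workflow described above a little more concrete, here is a minimal sketch of what the downstream analysis might look like once the data is on your desk. Everything here is hypothetical: the column layout, the item name `central_it_fte`, and the institution names are illustrative stand-ins, not the actual CDS export schema.

```python
import csv
import io
import statistics

# Hypothetical batch download: one row per institution per survey item.
# The columns and values below are illustrative, not the real CDS schema.
SAMPLE_DOWNLOAD = """institution,module,item,value
Our University,Staffing,central_it_fte,120
Peer A,Staffing,central_it_fte,95
Peer B,Staffing,central_it_fte,150
Peer C,Staffing,central_it_fte,110
Aspirant X,Staffing,central_it_fte,300
"""

# The peer group is defined locally, as the talk describes: you pick
# the institutions you care about before pressing the download button.
PEER_GROUP = {"Peer A", "Peer B", "Peer C"}

def peer_median(raw_csv, item, peers):
    """Median value of one survey item across the chosen peer group."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    values = [float(r["value"]) for r in rows
              if r["item"] == item and r["institution"] in peers]
    return statistics.median(values)

median_fte = peer_median(SAMPLE_DOWNLOAD, "central_it_fte", PEER_GROUP)
print(median_fte)  # median central IT staffing across the peer group
```

The same one-table download would open just as readily in Excel or a statistical package, which is the point of the design: the service hands you plain tabular data and stays out of the way of whatever tool you already know.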