Analytics scares people to death. It makes people nervous. I mean, who among us wants anybody looking over our shoulder and evaluating what we do? It's clear that this is about accountability. It's clear that we're looking for ways to determine how well someone is doing at what they're doing. And the inference is, well, if you're not doing well enough, you're a problem child.

You learn about the differences between measurement and evaluation. Measurement is numbers. On their own, they're not going to tell you much. What they give you is a representation of whatever those numbers are supposed to represent. It's the interpretation, the evaluation of those numbers toward the solution of a particular problem or the recognition of a particular opportunity. That's the rub. But of course, you've got to have the right numbers to do it. So there's a real interplay between the two: the numbers give you the things to work with, and the evaluation of those numbers is the place where you really need to be clear on what you're looking for, what you intend to do with it, and how you'll know when you're done.

So with PAR, what we really focused on doing is trying to have a conversation from the educator's perspective about the data that are being used to evaluate the work we are doing, where we are able to take those data and actually focus on the things for which we as educators are being held accountable. So are we working with innovation? Absolutely. Are we a disruptive innovation? Well, our goal here is really not to disrupt. Our goal is to answer a really tough and thorny question at PAR: what can we do to look at graduation rates, at progress and completion? If you don't have common metrics for that, it's really, really hard to come up with measures that will generalize.
And when we look at public policy activities like performance-based funding or scorecards, we realize that innovative programs, such as online programs and competency-based programs, are probably more at risk than even your standard programs. And if we're trying to raise the bar on doing a better job, then the fact is we had to come up with better ways of asking the questions. I mean, it's great to find students at risk, but if you don't know what you're going to do for them, you really haven't dealt with the problem at hand. So predictions on their own really aren't going to be enough. Knowing what you can do with the predictions really does make a difference.

We've got 1.8 million students in the data set and probably close to 18 million course records. We started with six schools, then we went to 16 schools, then 20 schools. We're up to 32 campuses in the sample right now. I mean, we weren't sure that getting all these data from all these different institutions would give us anything other than a bunch of mush, quite honestly. We wanted to start simply, in ways that were reliable and valid, and then figure out whether we could generalize. And when we realized that we could, we could actually take data from students attending the University of Phoenix, American Public University, Western Governors University, Rio Salado College, Broward College. We were trying to figure out: could we take every school from the R1s to the regionals to the two-years? Could we work with for-profit schools? With state schools? With online schools? As I think I mentioned, PAR started as a project that would look at innovative programs, so we actually began with online schools. But when you're tracking students' success, it's really hard to focus on the delivery mode. You've got to focus on the students.
So by focusing on the students and the outcomes, we had the opportunity to work backwards through the paths that students have taken. So now we actually have data that can answer questions like: how do students at for-profit institutions and state schools compare? Guess what? They're not that different. The differences are not between the for-profit and the state schools; the differences are between the two-year schools and the four-year schools. So that's important. I can talk about the differences between online, blended, and on-the-ground instruction. I can look at the difference between a full-time faculty member and a part-time faculty member in terms of student success. So what's delightful about the opportunity to work this way is that all these years, so many of us have been trying to figure out how we could do this, and now we've got a data resource that really is going to help us answer some of these questions. So I'm pretty excited about where we're going.