I think it's difficult to measure learning because learning is one of the most multidimensional things out there. And one of the problems with measuring learning is that sometimes we have to simplify it too much in order to capture data, or to capture understandings of it that we can use. For me, "learning" is really too broad, because it means so many different things. When you engage meaningfully in social practices and you are able to participate more successfully, to somebody like myself, theoretically, that's the primary form of learning. That's what I care about most. But you can't measure that. It's too contextual. It's too idiosyncratic. The knowledge is so bound to the context where it's used that you can't measure it in a psychometrically meaningful way.

So in some fields, in some disciplines, we actually have some really impressive tools for figuring out what people know and what they can do. In computer science, we have these really cool computer programs that can read other people's computer programs and say, oh, well, you passed these engineering challenges and failed these ones, and your code was this parsimonious relative to other people's, and here's a bunch of feedback we can give you. In other fields, particularly fields in the humanities and the professions, we have much weaker tools for measuring learning.

Learning is one of the most personal things that we do. I mean, nobody else can do learning to us. If we're not presented with conditions or motivations to actually attend to what's going on, take it in, and start thinking about it, the idea of having learned something is not going to happen.

The construct of learning, as a cultural construct, is really difficult to define and measure. If you're looking at particular behavioral patterns or outputs, I think we can measure those types of things. Can you get inside my head and see how I've changed? I'd rather doubt it. But can you watch how I might behave as a result of our interaction?
You absolutely can. Why is it hard to measure learning? It depends on what you mean by measure. Why is it hard to find reliable, simple proxy indicators of learning? I would say probably because there are no reliable, simple proxy indicators that we can trust. So we're asking the wrong set of questions, it seems to me.

Learning is everything from curiosity to memory, everything from imagination to being able to paraphrase and say it yourself. I will even use the U-word, which I was told at an ELI many years ago you shouldn't use. The U-word is understanding. Well, we can't use the word understanding because that's one of those cognition words that's all about things we can't measure. But we do know that certain kinds of things will indicate understanding. We do know that when students teach something to each other that they've just learned, that seems to increase transfer across domains. It seems to reinforce things in memory. It seems to give them a richer kind of approach to what it is they're talking about. There are cultural things to think about, social implications. There are individual psychological things that you might not ever be able to capture in something like a learning analytics platform. So when you think about learning as very complex and somewhat of a black box, that makes it more difficult to respond to the data that you see on students' learning.

So just our ability to assess any kind of competence is one problem. A second problem, which is actually much more significant in these large-scale informal online learning spaces like MOOCs, is that we don't know what people's competencies were coming into a learning environment. If you're going to pay to take introduction to physics at any college, it's unlikely that you already know the material of introduction to physics, because otherwise you would take another class and you wouldn't spend your hard-earned money doing that. In the MOOC space, we get that all the time.
So we have this incredibly wide range of learners coming into our courses. Some of them are total novices to a field. Some of them are subject matter experts, or instructors who are looking at figuring out how other people do things. And it's perfectly legitimate for someone who's an expert in physics to come to one of our courses and say, I wonder if I can pass the MIT physics class. And the fact that they got perfect scores on all the assessments doesn't actually mean they learned anything in our class. It's a measure of their competence.

But you could think of learning as multiple measures of competence strung together that you can associate with a particular experience. So ideally, we'd be able to say: a person coming in could do this, and a person leaving could do this, and we're pretty sure that it was our course that created that growth. So you have to have new technologies for assessing learning, and you have to be assessing people's learning at multiple time points. That's some of the complexity in measuring learning.

Measurement really involves constructs, like achievement. Achievement is a construct. You say mathematical achievement, and that is really the realm in which you can get into what a psychometrician typically would refer to as a measure, where you talk about things like reliability. You begin doing this magical stuff that places like ETS do, where they build these amazing tests. I mean, you can say what you want about those tests, but they measure something that people care a lot about, and they do it very efficiently and very precisely. So I would say that when you're talking about measuring, you're really talking about psychological constructs, like self-efficacy or achievement.

The kinds of analytics I'm most interested in are analytics that are not simply for people who are behind the scenes, and not simply for institutional kinds of assessment.
And they're not simply about signaling to the learner when he or she may be about to fail. For me, one of the most exciting ways of thinking about analytics is devising a medium to reveal to the learner what kinds of connections he or she may be making without being aware of them: connections with fellow learners, connections across courses and across disciplines that may be in their minds but have just never been realized.

One of the ways to do that, which we're exploring now at VCU as an outgrowth of things that have been happening at Mary Washington and elsewhere, is simply getting students to narrate their learning on the open web: not answering a question with a right or wrong answer, but reflecting on the story of their own learning as they go through all of these curricular opportunities that we've helped to design for them. My conviction is that a really interesting kind of analytics should reveal to the learner even more possibilities for their own connected learning. Analytics shouldn't simply be a diagnosis of what's happening now; at their best, they can be a doorway that suggests what else is possible.

I saw a tweet just today, I think, here at the conference. Somebody said, learning isn't magic, it's a series of things that students do. And I kind of didn't like that. It's not that I want it to be a complete black box, but in some ways, learning kind of is magic. It's a coalescing of factors that we might not even understand about humans and about each other. So to oversimplify it, to me, is a problem. To make it too complex to understand, that might also be a concern. So maybe somewhere in the middle, we can find a good definition.
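The pre/post idea raised earlier ("a person coming in could do this, and a person leaving could do this") can be sketched as a small computation. This is an illustrative sketch only, not something from the conversation: it uses Hake's normalized gain, a standard metric from physics education research, and the function name and scores here are hypothetical.

```python
# Illustrative sketch (not from the transcript): growth attributable to a
# course, expressed as Hake's normalized gain, g = (post - pre) / (max - pre).

def normalized_gain(pre: float, post: float, max_score: float) -> float:
    """Fraction of the available room for growth that was actually achieved."""
    if max_score <= pre:
        # Learner was already at ceiling coming in: a perfect score on the
        # way out tells us nothing about what the course added.
        return 0.0
    return (post - pre) / (max_score - pre)

# A novice and an expert can post identical exit scores yet show very
# different growth associated with the course experience:
novice = normalized_gain(pre=20, post=80, max_score=100)    # 0.75
expert = normalized_gain(pre=100, post=100, max_score=100)  # 0.0
```

The point the sketch makes is the one from the transcript: a physics expert who aces every assessment shows zero measured growth, while a novice with the same exit score shows substantial growth, which is why competence has to be measured at multiple time points.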