I've been enjoying this conference very much, as much here in the rooms as out upon the hill. It's a beautiful place to be. And as I've been here, I've been thinking a lot about this question. I've noticed that adopters and authors do judge and assess OER quality, but who is actually evaluating OER? And why, and how, and could they improve, and what are some of the issues associated with that?

And so, as I've been attending the sessions, I've been asking myself in each session: what's happening here that's evaluative? And I've been amazed at almost every session. Well, every session, really. There are many, many evaluative issues being addressed. I've heard many presentations that alluded to evaluation issues indirectly and implicitly. Almost everyone does that. A few are also explicitly about accreditation, certification, searching for evidence of OER effectiveness, and so on. I would call these summative evaluations. Basically, people deciding: is this OER something we want to use or not? Is this student that we've prepared using OER ready to take a test or not? Is this OER worth certifying as usable or not? Those are summative evaluation questions. And a few have been exploring a more formative effort to use evaluation to try to develop or improve or better use OER.

A session I was in a couple of sessions ago was on agile OER development. I thought that was a really interesting term, and it reflects well on what I'm promoting these days: Michael Quinn Patton's developmental evaluation. His book on developmental evaluation came out this year, but he's been talking about it for probably 15 years. The idea behind it is to do summative evaluation when you need to and formative evaluation when you need to, but to try to be agile. Try to be ready to adjust and modify the evaluation that you do along with the thing that you're developing.
Now, several of you have probably seen this Evidence Hub; there was a presentation about it on Tuesday, and then it was part of the science fair yesterday. Anybody here associated with the Evidence Hub? Great. I'm very impressed by that, and by several other things I've seen here that are like it, where people are essentially saying: let's gather the evidence as we go. Let's share it with people. Let people make decisions themselves based on what they're seeing and add their two bits. Once they've had some experience with something, let's have them share that. So anyway, those are some of the overview ideas that I picked up just in the last couple of days: the OER movement seems to be very serious about wanting quality as well as quantity, wanting to be able to use what they get to do something useful. I applaud all of you for that.

My particular interest is in the field of evaluation. I've been conducting qualitative case studies for quite a while now on people's evaluation life, their daily evaluation life. And this takes me into all kinds of walks and arenas. I became particularly interested in describing these phenomenologically, in great detail, looking at how people actually live their evaluation lives. But I've also found that they seem to fall into categories. There's usually something about stakeholders, the people who care about the thing being evaluated, and usually they don't care about it in the same way. You have the teacher on the one hand, the student on another, the OER developer on another, the technical person on another, and each of these people might be looking at the same object in very different ways. That's a common theme I see across all these case studies that I've been doing: stakeholders have different values. But what do we do about that?
So part of what I've been trying to explore in these case studies is: how do they define the thing they're evaluating, or the evaluand? I shouldn't have done that. How do they describe the thing they're evaluating? That's what I'm calling the evaluand here. What criteria do they use for deciding if the evaluand is up to snuff or not? What methods do they use for gathering information, and what results and recommendations come out of that?

So in preparation for today's presentation, I decided I would conduct three different case studies on people who are doing something with OER. And I have these three: a novice who was just getting into putting together an online book that she wanted to make available as OER; an innovator who wanted to share ideas about using OER with other people and get them involved; and a school whose staff adapt and generate OER to help their own students and to offer OER to others. So this is a... We could spend lots of time on this. How many more minutes do I have? I'm just wondering if I have 15 or 20 left. Okay.

So this summarizes the stakeholder issues, the evaluand issues, the criteria and questions, the methods, and the results. As I looked at the novice, I noticed that mostly she was interested in: who else might care about my book? How can I get this out to people who might care? She didn't have a lot of information from those stakeholders, though. She just had this book that she had written and thought was great. The evaluand itself was the book. But then, as you think about it, a book is never just a book. It's a book that's used by different people in different ways, and it's adapted to the other needs that people have. And then the next question is, given that it's a book: how well does it cover the topic? What are its pedagogical qualities? Is it adaptable? Is it going to be too costly for people to download if they don't want to read online? That kind of question.
And then methods for discovering this. A survey? This particular author was wondering: should I invite people to fill out a survey if they happen upon my book and want to use it? Or should I ask them to contact me and tell me things? Or should I just let them do what they do with it? Which is what I think most OER folks do. They just let people do what they will with it, and from the author's point of view they don't worry too much about it. Although, like I say, a lot of the presentations here have been changing my mind on that. I think there are a lot of folks who want to know: how is my stuff being used? And how can I know if it's being used well?

So anyway, I want to go through each of these others. You can see that the innovator asks slightly different questions and considers other stakeholders. And the school had an even bigger set. They're interested in the teachers themselves, who have to help create the classes for their students. They also have the parents of the students, as well as the administrators and the students themselves. Scholars who are interested in the OER movement and its potential, as well as potential adopters and sponsors. Well, just looking at the stakeholder line there for the school, you've got a complex set of people who are going to have lots of different, contrasting values. And I'm speaking here to the choir; you all know this.

The main point I want to make is that, as we look at this, developmental evaluation is a good solution. I'd like to invite you to take a look at the approach that Michael Patton has been developing. He essentially reviews complexity theory and the ideas behind the need for agility with an ever-moving target, which is what we have when we are trying to evaluate OER. It's always changing, and we're remixing it depending on who our students are and who our other stakeholders might be. So I'd invite you to take a look at that.
As you look over this list, I'm hoping that you're thinking about your own context, your own situation. What I'd like to do with the remaining time is invite you to share with us: who are the stakeholders that you address? What are some of the stakeholder issues that they bring? What are some of their definitions of the evaluands that you care about? What are some of the criteria and questions you are asking about those OER evaluands? What methods are you using, or wondering about, for gathering data on those topics? And if you have anything to say about results and recommendations, I'd love to hear that.

So what I'm going to do now is open it up for you to talk about these things. If any of you have your computers here and you're connected, I would advise you to go to this tiny URL. I mean, it's a lot shorter than the URL that Google Docs created when I created the thing I'm going to show you next. But if you can get that in there, I would then like to go to the next slide, and I'd invite you to create a name for yourself in the first column and then start typing. And then I'd like to hear from all of you who would like to share your thoughts about what the issues are for you surrounding these topics: evaluation of OER in general, stakeholders that might be involved and their differing views of what the evaluands are, what the criteria and questions are, methods, results, and recommendations.

Does everybody have the URL in? Can I go on to the next slide? Somebody help me; it's doing everything else that I didn't want you to see. Where's my little slide for you? What I want is this tiny URL, so maybe I'll have to open it again. I thought I had it open and ready to go. Okay, so this is what I'd like you to be giving me your thoughts about.
I will take the first line, the number three line here, but if any of you who are on want to just fill in your names, your fake names, in these lines, then go ahead and start typing things in that are associated with these questions. Any of you who would like to make a comment, now is the time to do it. Please.

You're asking, under evaluation of OER in general... you're assuming that there's an evaluation already occurring?

Okay. Is that what you're... No, not necessarily. I mean, that's a good question to ask. Is there an evaluation already occurring? That isn't always a good assumption, is it? It's an important thing in an evaluation to ask yourself: who's already doing an evaluation here? Probably they are making evaluations or they wouldn't even be involved, but they may not be formal ones. So that's kind of another issue, you know. Should the evaluations be formal? Informal? Okay. Yes.

There's a question of whether the institution has a research office that can help, or whether the faculty member might be on his or her own.

Okay. Yeah, that's a very good question. Good. Xena, thank you for getting in there. What else? Think about your own situation. Who are the stakeholders that matter to you in the work you're doing?

Students.

Students, for sure. Okay. Do students usually speak up for themselves? Or are they usually nice and quiet like you are today? The institution might have to ask. Okay. Yes. The faculty that adopt the OER. Okay. Yes. Funders. Funders, okay. P&T committees. What kind of committees? Promotion and tenure. Okay.

As you just look at that list, you can see right away that they don't all agree with each other, right? They're going to have really different kinds of issues. How would those people see their evaluands differently? Do you have an example of an evaluand where you've got different people who want different things out of it? Different ideas about what success would be?

No. For students, success would be lowering the cost.

Okay.
I'm going to put that under criteria: it needs to be affordable for students. So I'm assuming that the evaluand would be maybe a text, a course. A teacher. A teacher, good. What else?

The instructor wants it to be clear. They want it to be accurate according to their perspective. And they want it to align with the way they use the course.

Good. Yeah.

My funders are looking at the impact on society at large, or even more specifically on at-risk populations and so forth.

Okay. What else?

Increased engagement on the student side, better use of their time.

Good. So what are some of the methods that you might think about using for answering these kinds of questions? But also, let me just go back here: other evaluands. I was thinking about these repositories. It seems like that was a big thing I heard about a lot the last few days, repositories of OER. How do you decide if those are any good? One group I saw had panels of teachers who were getting together and saying, in order to be in our repository, we want to rate them by how many teachers feel like these OER would actually do what they want to do in their classes. So let me just skip over here. You could have teachers creating standards and comparing, or go through web crawlers. I'll just say that, because there are so many different ways you could go through it. That's kind of the idea that I got from the presentation. Other evaluands, or methods for evaluating these?

Student satisfaction surveys.

Okay. When I first... Oh, go ahead.

Well, I would just add to that whether or not it can be... whether or not it's standalone, essentially, or doesn't need that instructor-like component.

Okay. I'm going to put that here as another criterion: standalone or teacher-dependent? Yeah.

For students, it's frequent feedback on their performance.

Now, is that something you have experience with, where people are giving frequent feedback, or are you saying that's a method that ought to be there?

That should be...
That's... students are stakeholders who look for that, and their performance depends on that.

Okay. So probably it belongs under methods, maybe? Some way of automating it, maybe? Okay. Good. How much of this do you actually see going on? Do you have examples of results with recommendations that are coming out of processes like the ones we've been talking about?

Yeah. Our biggest result is the state-mandated tests for English, Science, and Math that we have to do on the secondary level. So we look at the outcomes. I agree that it's effective to evaluate the content itself, but you're always going to want to look at what goes into it and what comes out of it, on either side of the content, to measure its effectiveness as well. Is it doing the job that you hired it to do?

Good. Yeah.

I came in a little bit late, so I missed part of it. It seems to me also that when you're evaluating, you need some kind of a standard for judging against. What comes to mind is the conventional textbook type of class, based on conventional techniques. Would that be the standard?

Okay, so how does it compare to the conventional, is that what you're saying? Okay. Yeah. Five more minutes? Okay. What else? I appreciate TANF for adding those lines.

One of the things I wanted to say is that when I first was becoming aware of OER, it seemed to me that it was going to be kind of impossible to evaluate, because you don't know who's using your stuff. That was the idea I had for the first year or two, because I was thinking about the producer stakeholder, the person that's creating the stuff and putting it out there, and then it's just used by whomever, and they do whatever they do with it, and they change it, and you don't really know how it's going to be used. But I've come to realize that, hey, there are ways to ask people how it's going. That can be kind of obtrusive, intrusive, and I think a lot of times people don't want to do that.
But you can also look for feeds that come to you from blogs, from seeing things created and just reading about them in the news, and say: oh, that's related to the thing I created. I can see that. So you have some of these informal ones, but it seems to me that the main focus for me now is that OER is mostly to be evaluated by the users. The main stakeholders are the people who are deciding whether they want to use this or not, whether they want to adapt it or adopt it and modify it in various ways. And to me, that's something to celebrate in a lot of ways, because historically the whole idea of evaluation has been that the owner is in control of the evaluation. They decide whether the thing is working or not, and they tell everybody else whether it's working or not, and the receiver, or the student, like I was saying at the beginning, doesn't really get their voice heard that much. When people are able to make their own evaluation judgments and do what they want to with the OER, then their voice is more likely to be heard.

There's been a big movement in the field of evaluation, one that some people within the field critique, called empowerment evaluation. David Fetterman out of Stanford is the author of that, and a lot of people have said, well, that's not really evaluation, Fetterman, because what he's been trying to do is go around and get people to be better evaluators of their own stuff, their own learning, their own programs, rather than waiting for a federal evaluator or somebody external to come in and certify that yes, this is a good thing, or not. Traditionally the idea has been that unless you're objective and can stand outside the evaluation to a certain degree, you're not going to be able to make a good evaluation. Well, that debate's been going on now for 20 years. Fetterman's ideas are still around, and I think they've been adopted in a pretty powerful way again by Patton's developmental evaluation.
So I would highly recommend that you take a look at that. If any of you have a chance, I'd love for you to go ahead and add to this matrix that Kane and Xenith have been helping me expand. I'll leave this Google Doc out there for a while, and in case you didn't get this URL, I'll put it back up so you can write it down, and if you have a chance, go ahead and help me fill that in. Thank you very much.