David's listed on this presentation. I don't know where he is; maybe he's sleeping off his OER Doritos from last night. I'm Seth Gurell, a grad student studying under David Wiley, and this is part of my dissertation work. I'd hoped to have the first part of it done for this session, so this is more a work in progress. One of the things I did want to say is that the first rule of the Delphi study is that you do not talk about the Delphi study. So if you are participating in the Delphi study, don't talk about the Delphi study; you're welcome to comment, just don't identify yourself as a participant. I'll explain what the Delphi study is in a little bit. I also wanted to point out that my slides are available. They are, I apologize in advance, a little text heavy, but you're welcome to download them. I haven't openly licensed them yet; I'm sure I'll find a license I can make my peace with by the end of the day.

Okay. So one of the things David talks about is the four Rs of reuse. One is reuse: I have an educational video, I download the educational video, I reuse the educational video. Two and three are revise and remix: I revise it, I alter it, I improve it, I get rid of some errors; I remix it by taking two sources and putting them together. And then of course, four, I redistribute. One of the challenges, and we've talked about this at the conference, is that reusing and remixing are not occurring with the degree and frequency that we would like. And the question is why.

So what David and I have done is propose that there are three general areas that are barriers to reuse. This is not just our idea; it's based on our review of the literature. One is content: the content just doesn't fit, it's not what I want. Second is pedagogical: I'm doing problem-based learning and this is a lecture, and I just can't make it fit. And the third is technical. That third one is what I've chosen to focus my dissertation on. You start out with your dissertation wanting to save the world, and, you know, I got it down to there.

A year ago David published a paper with some grad students on what we call the ALMS Framework, the ALMS analysis. It's BYU, so we have to make everything vaguely religious; that's why we have ALMS. It was originally SLAM, but that's not how we roll at BYU. The A is access to editing tools. The L is the level of expertise required to revise or remix. The M is meaningfully editable. And the S is source file access. Are there any questions about this so far?

Now let's talk about the rubric. The goal was to break each of these constituent parts down into a section of a rubric, create that rubric, and then start looking at some OER. I do want to note that the assumption is that whoever is doing the rating has some technical knowledge. The intent behind this rubric is not to give it to your grandmother and have her decide how hard it is to technically remix something; there is a certain amount of knowledge we assume. The goal I wanted was this: if you put a pre-service teacher through an intro to ed tech course, could they then take this rubric and use it? And then third, I wanted it to apply to a variety of OER.
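To make the framework concrete, here is a minimal sketch of the four dimensions as data, in Python; the wording is paraphrased from the slides, and the actual rubric expands each dimension into several scored questions.

    # The four ALMS dimensions, paraphrased from the slides.
    # The rubric itself breaks each of these into several
    # questions scored from 1 (hard to reuse) to 4 (easy).
    ALMS_DIMENSIONS = {
        "A": "Access to editing tools",
        "L": "Level of expertise required to revise or remix",
        "M": "Meaningfully editable",
        "S": "Source file access",
    }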
So images, video, text, whatever: I wanted it to apply to all of them. So let's talk about the first part. The first part of the rubric is, of course, access. We scored four on the high end, meaning highly reusable, and one on the low end. The first question: is the appropriate software application pre-installed in the operating system? Can I open up WordPad or TextEdit and use it? The second question: is the appropriate software downloadable from the internet? Can I find it through an internet search? We kept that a little vague: was it not found, or did it require significant research? We haven't operationalized that to say, okay, I had to use ten keywords, I had to use Boolean operators. We deliberately kept it vague, and that's one of the things I'll come back to throughout this. Oh, there he is. He's slept off the OER.

One of the things I wanted to say about this rubric is that part of the intent is to make the rubric itself remixable and revisable. So you could decide how to operationalize it, decide how much of an internet search is reasonable for you, or what kind of criteria make it work for you.

That plays out in something like this: how much does the application cost? So I found something on the internet, I can download it; now we're rating how much it costs to purchase that product. And you'll notice we say "significant or expensive cost." One piece of feedback we got was: why don't you operationalize that and say $25, $50, $100, and $100 or above? We may revisit this, but initially we kept it deliberately vague because we wanted it to apply broadly. "Expensive" for OER Africa is different than "expensive" for MIT OCW. Same thing with how large the program is to download; we just ask whether it is a large file size. David has this horror story of trying to download OpenOffice at a conference using conference Wi-Fi. If you're in New Zealand, prohibitively large could be fairly small, versus here in North America, on my broadband, I don't mind downloading 150 megs. And then, does the program depend on other software, on programming libraries? This is the Linux dependency hell. Anybody? No? Come on.

And then the third criterion: is it available from a web-based service? Is it available through Google Docs? Can I edit it through some sort of web startup? And again, how much does it cost? This last one is a nod to Stephen Downes. If it's available as a web service, how much of a sign-up is it? What kind of privacy concerns are there? How much personal information do I have to give out? Because it's great if I can edit an OER using a web-based service, but if I have to give my blood type, we think that should be factored in. And then, just flat out: are you able to cut and paste the text? Even if the format isn't kosher, can I just grab it through cutting and pasting? So that's the A part.

Now we move on to the expertise. We ask what degree of editing expertise is required. You'll notice that on access we had one, two, three, four; here we collapsed two and three into 2.5. So essentially, four is beginner, 2.5 is intermediate, and one is advanced.
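Before moving on, here is one way the access questions above might be operationalized in code, a minimal sketch only: every threshold in it is an invented placeholder, since the rubric deliberately leaves terms like "significant research" and "expensive" vague so raters can adapt them.

    # Hypothetical operationalization of the A (access) questions.
    # All thresholds are invented placeholders, not the rubric's.
    def access_score(preinstalled, easy_download, cost_usd,
                     web_service, can_cut_paste):
        """Return a 1 (tools hard to access) to 4 (easy) rating."""
        if preinstalled or can_cut_paste:
            return 4  # e.g. opens in WordPad/TextEdit, or plain text
        if web_service or (easy_download and cost_usd == 0):
            return 3  # free tool, findable with a simple search
        if easy_download and cost_usd < 100:
            return 2  # findable, but costs money
        return 1      # obscure, expensive, or unavailable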
In other words, with expertise, the more advanced the knowledge I'm required to have in order to edit this OER, the lower the score. And we ask the same thing for video. If your OER is not a video, or doesn't have an image, if it's just flat-out text, then you have the NA option there, and that component just doesn't factor into the analysis at all. And then we ask again for programming. We try to give some examples of what we consider to be beginner, intermediate, and advanced. Audio editing is probably a better example than programming: am I remixing various sources, grabbing multiple sources, applying transitions? Versus am I just saving it in a different format, just trimming 30 seconds off the front and back? So that's the idea.

Next is editable: what portion of the text in the OER is meaningfully editable? A nice example of that is a PDF. I scan a syllabus and it gets treated as an image. I can't cut and paste that; I can't even fix that in Acrobat Professional. It's an image. It's stuck. It is not meaningfully editable. Same thing with whether images are stored independently rather than embedded in a PDF, or whether an image has a watermark in it, things like that. And we ask the same thing, I don't want to beat the rubric to death, for audio, then for video, and then for miscellaneous other components.

The last one we ask in ALMS is what portion of the source files is available. If I can edit it, do you at least have the source files available to me? You gave me a PDF, but did you give me the Word document? Do I at least have some kind of source file, whether it's text, images, video, or audio? For programming, is the source code available? And source files for anything else.

So here's an example, one sample. This is a syllabus for Visualizing Cultures, off MIT OCW. It's text, on a web page. What we end up doing is scoring it, and this is how we would score it. Access was a 3.75. Level of expertise was a three. Meaningfully editable was a four. Source files was a four, because MIT OCW does a good job of at least letting you download the source file, in this case the source HTML. So the average came out to be 3.69, which is pretty low. No, I'm sorry, it was pretty high. We've done others, like RealMedia files, and I hate RealMedia files; those were probably a 1.6 or something like that. But you can see that this text from MIT OCW, because it's so easy to just cut and paste and you've got the source files available, scores high. If anyone sees Steve Carson, tell him I expect my check in the next few weeks.

So let me talk about the Delphi study, about what a Delphi study is. We wrote this rubric out, and a Delphi study is a methodology that was developed by the RAND Corporation, hence the nukes. The RAND Corporation would ask a question like: how many nukes would it take to destroy the US's ability to build weapons? And they'd ask some industrial experts, they'd ask nuclear scientists, they'd ask a variety of people, and each would post a number. The researchers would then redistribute those numbers and say, here's the range: one guy said five, one guy said 300. The stress in a Delphi study is on anonymity, so it's just one vote; you're just voting with that number. No one gets weighted more, and there's no undue influence.
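Backing up for a second to that Visualizing Cultures score: the arithmetic is just a mean over the rated components, with anything marked NA left out, as described. A minimal sketch:

    # ALMS average for one OER: the mean of the dimension scores,
    # skipping any component rated NA (None here).
    def alms_average(scores):
        rated = [s for s in scores.values() if s is not None]
        return sum(rated) / len(rated)

    # The MIT OCW Visualizing Cultures syllabus from the talk:
    visualizing_cultures = {
        "access": 3.75,
        "level_of_expertise": 3,
        "meaningfully_editable": 4,
        "source_files": 4,
    }
    print(alms_average(visualizing_cultures))  # 3.6875, the 3.69 reported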
That stress on anonymity is why I've been paranoid about my Delphi study participants talking to each other; I've got three floating around here at this conference. But the idea was to take this rubric and get experts in the OER field to tell me what they think about it. The criteria I used, and this was a fun find because it came from a completely unrelated study, were three. One is reliability, and that's not in the statistical sense; it's more like face reliability: looking at it, do you think I could reliably score this particular cell in the rubric? Second is desirability: how worthwhile is it to get that rating? And third is feasibility: how practical is it to get these ratings? And so I asked each participant to rate each cell. Some gave an entire row the same score, and I was fine with that; people are busy, and it takes time to do this kind of thing.

So these are the results. I should put this into context: participants rated each cell on a scale of one to four. So one to four on reliability for each cell, one to four on desirability, one to four on feasibility. With that context, these are the averages across all the Delphi participants. What you see there is that the access portion of the score was a 2.87, the lowest. That's where people had the most baggage; the lowest score is where they thought it was least reliable and least feasible to measure. You can see that meaningfully editable was the highest. In terms of the columns, people were nervous about reliability, but they seemed to be in agreement on the desirability and feasibility of what I proposed. And you can see the grand total average in the bottom right is three.

I wasn't surprised about access, because the access portion was the hard one in developing this rubric. This goes back to some very old discussions we've had in this field about what is a property of an OER versus what is the surrounding context of an OER. That's the challenge with access: how much of it is inherent in the OER, and how much is just a function of what I found on Google or what tools I happen to have right then and there? If I have the Adobe Creative Suite, my access to some of these file formats is much greater than, say, if I'm using the $35 tablet in India.

In terms of concerns participants brought up, because in addition to the scoring we asked for comments: one concern was about the cost of technical reusability. One of our participants said, I can produce OER that has a high degree of technical reusability, but it would mean producing one third less OER. The expense of creating highly reusable OER is high; technical reusability is expensive. Another concern about technical reusability was one of my own concerns in developing the rubric: this rubric is not meant to be a measure of quality. You could have a fully engaging, fully awesome OER, one that really engages learning, that scores really poorly on this rubric. This just measures technical difficulty. And then there were the concerns I mentioned about access itself.
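Mechanically, that round-one analysis is just a per-cell mean across participants; a minimal sketch, with the ratings invented here since the real responses stay anonymous:

    from statistics import mean

    # Each Delphi participant rates each rubric cell 1-4 on each of
    # reliability, desirability, and feasibility. Values invented.
    ratings = {
        ("access", "reliability"): [2, 3, 3, 3],
        ("access", "desirability"): [3, 3, 2, 3],
        ("meaningfully_editable", "feasibility"): [4, 3, 4, 3],
        # ... one entry per (ALMS part, criterion) cell
    }

    cell_means = {cell: mean(vals) for cell, vals in ratings.items()}
    grand_mean = mean(v for vals in ratings.values() for v in vals)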
So in terms of future directions, David and I have talked about doing automated scoring for the access part, because of this difficulty of: gee, I'm searching the internet, I used five keywords, does that count as significant research to find the tools? Is that difficult? What we've talked about is having expert-driven matrices that would have to be updated regularly. In other words, as a scorer you could go to a web page. What am I looking at? I'm looking at an image. What type of image is it? It's a JPEG. Okay, here's a JPEG, here's your score, and here are all the different web-based services and all the programs you could potentially edit it in. So in addition to providing a score, it's also helping you.

We'll continue revising; this is the first round. I have another three Delphi rounds that hopefully I'll finish before my wife kills me. And then after the fourth round, we stop. Normally in a Delphi study you have a point of convergence: you decide that when the standard deviation of the different ratings drops below a certain point, you consider yourselves done. We could be arguing about this for another year, so we just said we're going to do four rounds, and what we have, we have. And then the idea is to pay grad students in pizza, have several of them rate OER, and establish inter-rater reliability. David, do you have anything you want to add to that? The pizza is key.

So I want to open it up for questions. Does anyone have any questions about this? I don't know if I'm early or late or what. How much time? Oh, that's good. Go ahead.

When you're rating, aren't you rating out of your own environment? Right, some of those scores, as you identified, depend on the rater's physical environment. So one possibility, as we mentioned, is to essentially take the access part out of the equation, because we just give raters a couple of predefined questions and then give them a rating for the A part. The other is that under the conditions of this study, the raters would all be in a computer lab, so they would all have the same level of access; they'd have the same software available.

Yes? Once you feel the rubric is validated, do you know of any follow-on research, maybe transferring this over to the development of automated crawlers or tools that would visit these resources and use the rubric to rate them? Yes, we've talked about it. We haven't planned it, because I want to graduate. You know, I said, let me graduate, I'll pass it off, and then, yeah.

Maybe I'll take that one. I don't know how many of you saw the demo yesterday, but those guys ran a recommender service: when you're using the tool set and you visit an OER, up in the top right-hand corner of your browser it says, here are other OERs that are similar to this one. So if we could fully automate, or at least partially automate, this process, then in addition to recommending the OERs, you're recommending them in the context of their scores. So integrating this with the work those guys are doing seems like one really obvious place to start.
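On the automation idea, here is a minimal sketch of what an expert-maintained lookup might look like, along with the standard Delphi stopping rule we sidestepped by fixing four rounds; the matrix entries and the convergence threshold are assumptions for illustration, not anything we have built.

    from statistics import stdev

    # Expert-maintained matrix for automated A scoring: file type ->
    # provisional access score plus tools that could edit it.
    # Entries are illustrative placeholders needing regular updates,
    # which is the point of keeping experts in the loop.
    ACCESS_MATRIX = {
        "jpeg": {"score": 4, "editors": ["GIMP", "Pixlr (web-based)"]},
        "html": {"score": 4, "editors": ["any text editor"]},
        "rm":   {"score": 1, "editors": ["RealProducer"]},
    }

    def lookup_access(file_type):
        entry = ACCESS_MATRIX.get(file_type.lower())
        return (entry["score"], entry["editors"]) if entry else (None, [])

    # Standard Delphi stopping rule: stop once every cell's ratings
    # have converged, i.e. the standard deviation falls below a
    # threshold (the 0.5 here is assumed).
    def converged(ratings_by_cell, threshold=0.5):
        return all(stdev(vals) <= threshold
                   for vals in ratings_by_cell.values())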
Yeah, go ahead. What about developing a set of guidelines for how to make OER more accessible, you know, to raise the score?

Yeah, you could certainly reverse-engineer it. Once you start seeing these scores, you can reverse-engineer. I want to be careful about being overly prescriptive, because I do understand that there are times when just releasing a Flash object, or, heaven help you, a RealMedia object, a RealPlayer video or whatever, is what you can do. So the intent here is not to say this is the one way. But when we start automating these scores, I think it might be an eye-opener for some of these repositories, and the hope is that it starts a conversation: well, you know, I can't do pretty HTML pages for everything, but I could do plain text under these conditions, or we can automatically generate that, things like that. So yes, you could absolutely reverse-engineer it. My baggage with that is that it then becomes the one true way to do OER, and that's not what I want. But I do want to make it clear for those who are serious about making things easy for teachers. It strikes me that Tom Caswell in Washington, they're doing it in Google Docs, which would score pretty favorably on this, and they did it for exactly that reason, that it would be easy for technical reuse. Yeah, okay.

I apologize if I missed it, but are legal issues or licensing part of the rubric? No, no, this is strictly focused on technical, not legal. We could do a whole... yes, David? Yeah, so technical is far and away the least contentious part, right? Seth is one doctoral student; another doctoral student will probably continue this line of work. I think before we even do legal, we'd probably do pedagogical first. That would be a rubric around pedagogical issues around reuse, around making things easy to reuse, which would be even less of an ordeal than what we'd get when we turn to fighting the licensing fight. But the idea is that there are these components: there's a technical component to reuse, a pedagogical component, and a legal component. And I think some kind of explicit guidance would be really useful in all these areas, but we've started with the most objective, concrete one, where you can pull a number out. It's the low-hanging fruit and all that.

Go ahead. One of my concerns, on the technology side of it, is the persistence of the technology. If somebody's building a resource in something like SoftChalk, what happens when SoftChalk goes away? What's the likelihood that the technology will still be there? Is that something that you take into consideration? That is a good point. It would be difficult to operationalize, though; I mean, what are Google's chances? I look at Google releasing Google Dart: if I put a project out on Google Dart, what's the persistence of that? I don't know, we'll see. It's like buying software ten years ago: you buy an Apple product, build for the Mac version, and look at the life cycle. Go ahead.

Yes. You said licensing is out of scope, but have you considered including whether or not there's a technical record of the license, the metadata, with the resource? Whether the license information is there at all?
That's a good idea. I hadn't really thought about metadata in this. That's probably something I need to think about. Some of the challenge with that comes down to audience: we care a great deal about having that licensing information, but a teacher may not so much. It comes back to thinking about who this rubric is really for. I need to think about that, but yes, I think it's a good idea and it's worth considering.

No, I think you were next. Okay, I have zero minutes. Where did my time go? I'll take two more questions because I'm belligerent.

I'm concerned about ADA accessibility, because something that scores well here, like Google Docs, can actually be unusable in an ADA sense, because Google Docs are not accessible. So did you take any of that into consideration? No. I think we probably should have, but no. Yes?

Would you please post the SlideShare information? Yes, let me go back. There we go. And my email is just my name at gmail.com. I'm reluctantly on Facebook, and also on Google Plus, so if you guys want to contact me; or I'll be here for the rest of the day. I got a SlideShare account just for you guys. Yeah, absolutely. I think that's all we have time for.