So, I think the idea of these breakout sessions is that they're keyed to the three pillars of the OER ecosystem: high-quality supply, implementable standards, and supportive policies. And this particular session is devoted to implementable standards. So there are four people who are going to be speaking for about five to seven minutes on a variety of topics: Jennifer Childress from Achieve; Todd Rose, who's affiliated with the Graduate School of Education and is also a researcher at the Center for Applied Special Technology; Jutta Treviranus, who's from the Floe Project; and Lisa McLaughlin from ISKME. Each of them will talk for about five to seven minutes, and then we'll pull some chairs up to the front and we'll be able to spend the second half hour in more of a back-and-forth kind of scenario. Yeah, would you do that? No one gets more than seven minutes. Yeah, boom. Hit them with that at four. Great, so they have an order worked out that I will not intervene in, but let's go ahead and get started.

I don't have very many slides; for some reason people are complaining about PowerPoints. I don't know. I'm a scientist. I like them. So whatever. But I wanted to talk today a little bit about state standards, state content standards, and I discovered from conversations in the past few days that that is not a widely understood term. It could be that the people who are left here in this room are the choir, and so it doesn't really matter anyway. But just in case, because I don't know all of you, I wanted to briefly mention what it is that I mean, what it is that Hewlett means, when we talk about Common Core State Standards. These are educational content standards that states adopt as policy that set the bar for the system. This is not the ceiling; it's the floor. And so that's a very important thing to know. The Common Core State Standards are kind of exciting for a few reasons. One is, oh actually I forgot, where's the little, thank you so much.
So most states had a pretty low bar prior to adopting the Common Core State Standards, and now we have a slightly higher bar in the states that have adopted. What that means is that students who were not held accountable, not held to very high expectations, were getting, let's say, a lower standard of education. And as you can see, we now have 45 states and the District of Columbia that have adopted; actually, Minnesota adopted English and not math, so it's kind of 45 and a half. These Common Core State Standards are for math and English language arts, two of the core disciplinary areas. Science is coming out early next year; it won't be called Common Core, but it will be very similar, ready for state adoption. This is very exciting because a lot of states have policies and laws that tell their school districts that they have to adopt curriculum materials, instructional materials, and professional development training that align to their state standards. And now we have all these states actually collaborating, with a new culture of sharing, to use the same standards, so you have a really different set of opportunities for the states. I've heard a lot of fear about what this means for the OER community, but it's actually a good thing for a few reasons. With one particular set of materials, if they are aligned to these standards that all these states are requiring materials to align to, this offers a greatly, greatly increased opportunity to get your materials out there and adopted. Also, because the states have all just changed their standards, they are all in a rush, a mad, mad rush, to go around and find new materials. So teachers are currently trying to supplement their textbooks with pieces off the web, and this is kind of just an accepted thing right now. It's a good time to look at changes.
So as I mentioned, the states are working together in an unprecedented way, and this is creating all kinds of mind shifts in how they think about sharing. For instance, they want to come work with my organization, Achieve; they want to work with other organizations to think about how we choose materials, to think of common ways to look at the material adoption process. This has never been done before in the U.S. So to take advantage of all of this, Achieve created a set of rubrics to help states, help users, and actually help material developers know how well they align to the Common Core. It's not enough to just have the right keywords in there and say, "I'm aligned." And so we have a rubric to actually say: does this align completely? Does it match some of the performance expectations, or maybe some of the practices, but not all? Does it match fewer? And so on and so forth. We also created several other rubrics to look at other aspects of quality, such as explanation of content. Is this a good tool for a teacher? Does it have good practice problems? With these rubrics, there's the opportunity to look at different elements of quality rather than just saying a material in general has five stars. What does that mean? Does it have pretty pictures? Do I like how the wording is laid out? In this way, you can actually say not only is it good in this particular aspect of quality, but I have actual rubrics to say why. And those rubrics allow a much greater degree of inter-rater reliability. These are the eight rubrics. Seven of them are now coded on OER Commons, which you will hear about in a little bit, and that system allows users to use this right now. It's just in PDF form on the Achieve website, and that's not very useful except to curriculum developers, but on OER Commons and other kinds of websites it can be used in an interactive format. So that is all.
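The rubric idea Jennifer describes, rating a resource on several distinct dimensions rather than one star score, can be sketched as a small data model. This is an illustrative sketch only: the rubric names, the rating scale labels, and all identifiers here are invented, not Achieve's exact wording.

```python
# Hypothetical sketch of a rubric-based OER evaluation: each resource gets
# one rating per rubric dimension, plus a rubric-specific comment, instead
# of a single aggregate star rating.
from dataclasses import dataclass, field

# Illustrative rating scale, ordered from lowest to highest.
RATING_SCALE = ["not applicable", "weak", "limited", "strong", "superior"]

@dataclass
class RubricScore:
    rubric: str        # e.g. "Degree of Alignment to Standards"
    rating: str        # one value from RATING_SCALE
    comment: str = ""  # evaluator's note, specific to this rubric

@dataclass
class Evaluation:
    resource_id: str
    scores: list = field(default_factory=list)

    def add(self, rubric: str, rating: str, comment: str = "") -> None:
        if rating not in RATING_SCALE:
            raise ValueError(f"unknown rating: {rating}")
        self.scores.append(RubricScore(rubric, rating, comment))

ev = Evaluation("resource-123")
ev.add("Degree of Alignment to Standards", "strong",
       "Matches most performance expectations, not all practices.")
ev.add("Quality of Explanation of Content", "superior")
print(len(ev.scores))  # 2
```

Keeping each dimension separate is what makes the later per-rubric averaging and inter-rater comparisons possible, instead of collapsing everything into one opaque number.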
And I will be happy to talk more about standards-type things later if you have questions.

So I'm going to do my best to stand behind a mic, which doesn't work with my temperament; I worry you won't be able to hear. I want to make an argument: we're here to talk standards, and my talk is that variability is the standard. At CAST, where I work, we know standards are important. We also get very uncomfortable when we talk about them, because they often lead to a sort of rigidity, a "we know what's best," and that's the worst-case scenario. But I want to make an argument that there's something to be had from the modern learning sciences around variability, around what we actually know about how people vary in learning. Most of education is a history of dealing with that, whether it's in grade levels, whether it's in special education, those kinds of things. So I want to talk a little bit about the need for, we can call it standards, we can call it guidelines, whatever, some sort of principled way that we can design more flexibility into the open system to make it more effective. At CAST, we developed a framework called Universal Design for Learning. It was an attempt, once again, to find a principled way to identify aspects of variability, where we know from modern neuroscience and learning science that people vary naturally along these dimensions and that we can design for them in advance. And as you'll see in the next talk, we can actually use data more effectively in these kinds of things. But I'm violating the first rule of public speaking, which is that I actually don't want you to read all the text; you get the visual of it. Because none of it really matters if you don't feel and appreciate what I mean by variability. So I want to take a pretty basic task as an example, just to show you how much natural variability is there and what it means for design.
So, the Rubik's Cube: how many of you have actually ever tried it? Okay, good. I tried this in high school, and no one in my son's class had, so that made me feel old. But it's got to be one of the most simple but frustrating tasks ever devised. I've never actually finished one. The objective is pretty clear: blue on one side, yellow on the other. Some of us have tried it, a few of us have completed it, but I would say very few of us are what I would call expert at it: there's this group of people who can consistently solve it in under 30 seconds. And what struck me is not just that there were people who could do that, but that when you actually get into their little expert circles, it turns out there are at least 12 different strategies you can use that will get you to expertise, solving that cube in 30 seconds. For example, if you are good at pattern recognition, there's a strategy that relies on your ability to quickly identify patterns on the fly and adjust your strategy accordingly. There's another one if you're not good at pattern recognition but you can memorize algorithms: if you can memorize 50 algorithms (I don't know how you do it, but some people can), you only have to identify the pattern on the cube once, at the start, and you could actually solve the cube blindfolded. And it can go on and on like this, where there are all these different strategies, and the one that would be best for you depends on your own particular variability. Now, by the way, I can tell you one strategy that is guaranteed not to work, which I actually did try as a kid: where you take the stickers off, right? It turns out that people can totally tell, and it takes way longer than 30 seconds. So this is a silly example of basic strategic differences toward the same simple outcome, but I want to add one more dimension of variability, still using the Rubik's Cube example.
So this is a colleague of mine, T.V. Raman from Google. He's one of the smartest guys I've ever met. He's also been blind since he was a child. By the way, if you have an Android phone, he helped develop those keypads where, instead of a fixed keypad to dial, wherever you put your thumb becomes the five and you can move around from there. It's quite an innovative, clever thing. But here's the thing. Just think through for a second: if Raman wanted to demonstrate his ability with the Rubik's Cube, wanted to learn it, to show what he could do, all the strategy in the world wouldn't really matter unless we addressed another aspect of variability. In this case, it's how the cube is represented. Color is a purely arbitrary choice to represent it; we could have chosen something else. And in fact, we could address Raman's predicament by simply designing a different cube with a different kind of pattern, and they've done this. If you look online, everyone's really excited about this new cube. It's beautiful. And at first, when you look at it, you think, wow, that is beautiful. And it is beautiful, but it's not very good design. Like, why is it white? Now you have to know Braille to be able to do this cube. So, great, you can do it if you know Braille, but now you've created two cubes to solve the same problem, so you don't get any economy of scale. Of course, that didn't have to happen. We could have thought from the beginning about variability in representation. We could have designed something that had color and Braille, among other things. And if we combine that with variability in strategy, we start to have a learning environment with a lot more flexibility, one that would allow people's talents to rise regardless of their particular variability. So, like T.V. Raman: it turns out he's actually one of the record holders in the Rubik's Cube. He can do this in 24 seconds. I've shortened the video for effect.
And what's funny is, of course, he has that different kind of cube, with Braille as well as color. So he has accommodations, right? I don't know how your brain reads Braille; it's that much harder for him anyway, but it still astounds me that he can do it. I just want to say, back to this idea: in any task we have, any learning environment, there is a ton of natural variability. And we have a lot more knowledge now about how to design for that. It's beyond this idea of "what's your learning style," all this stuff where we're simply trying to put some kind of description on the person themselves; it's much more dynamic than that. But we can design for it. And I think there's a need for, whether we call them standards or some kind of guidelines, something that gives us a principled way to start thinking about how you add flexibility into these environments. And the open environment, I would argue, has an inherent advantage in terms of creating more flexible stuff. Because, for example, if I create something for my course on neuroscience, I can deal with some variability, but I have blind spots. I can't do everything. But if I license it the right way with Creative Commons, someone else can pick it up and add different aspects of variability. And as that starts to aggregate, you can actually end up with something that is phenomenally flexible. And my hope is, just to wrap up, that if we're going to use standards for variability, we make sure that the standards themselves are also open to change. That is, if we see them used, frameworks like Universal Design for Learning should be modifiable, should be falsifiable. Thank you.

So I'm going to continue in the same vein. My talk is called Converging on Diversity. We know we need diversity.
Is anyone familiar with Scott Page's studies? They show that the more diverse a group you have, the more likely it is to predict, plan, and create creative solutions. So it's not so much putting the brightest people on a team as putting the most diverse people on a team. We definitely celebrate outliers; I don't need to quote Malcolm Gladwell. If you look in the paper, the bios and features of specific people are usually outliers, and the story is usually "how I survived my education and succeeded despite my education." And yet our education system is designed to achieve uniformity, discourage deviation, reward conformity. We agree that learning is a complex, highly nuanced, multifaceted thing; again, you don't need to read all these complex things. Yet we line everyone up on a single scale. We know that much of what we teach will be outdated by the time our students graduate, that we need to develop learning skills. Yet most of our resources are static; our focus is on content. So where are we going with OER at the moment? Well, I'm Canadian. Does anyone here know Leonard Cohen? I was going to play the Leonard Cohen MP3 file, but I decided not to. There is a lyric that I particularly like, which says: ring the bells that still can ring, forget your perfect offering. There is a crack, a crack in everything. That's how the light gets in. And I think what we need to do at the moment is focus on the cracks. There are many, many cracks in our educational ecosystem; you just need to Google "education" and "news," and the news is definitely there. Everything from dropout nation, disengaged learners, learners who are marginalized, reluctant learners, literacy levels dropping, numeracy levels dropping; we had the statistic regarding our place within PISA among countries; nations that are trying to rebuild their education systems. There is more than enough rich fodder in terms of cracks.
And one of the things that we're doing to address those cracks is to rethink what we mean by accessibility and what we mean by disability. I don't need to give you the standard statistics about the likelihood that you're going to experience a disability before you die. But we've decided to completely reframe this thinking and see disability as a mismatch between the needs of the individual and the services, tools, or environment: not a personal trait, but relative to the context and the goals. So what do we mean by accessible design, then? It's determined by the match or fit between the user and the tool, service, or environment. With one of our projects, called Floe, we recognize that every learner learns differently, that learners are diverse. And with digital resources and digital delivery mechanisms, content can be easily reconfigured, and we can create the plasticity and flexibility that will enable this diversity. We all learn better if the educational environment and content match our individual needs, and we can deliver one-size-fits-one learning. If you think about it, that will address many of the cracks I just mentioned. So Floe is a matching service for learning, and there are four main parts. The first, as we talked about with deeper learning, about learning to learn, is a discovery system that allows each learner to discover, explore, declare, and refine how they learn best. Then there's a service that transforms the resources, augments the resources, or replaces the resources to match those particular declarations of what the learner needs. Then we have a demand-supply pipeline of possible producers to meet the demands and fill the gaps; here we take advantage of crowdsourcing, all of the various services that are already there, and peer production as well.
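The first three components Jutta describes (declared learner needs, resource transformation, and a demand pipeline for the gaps) could be sketched very roughly as a matching function. Every field name and structure here is an invented illustration, not Floe's actual schema or API.

```python
# Rough sketch of the "one size fits one" matching idea: compare a
# learner's declared needs against each resource's available adaptations,
# return the best match, and surface the unmet needs as demands for the
# producer pipeline. All keys and values are hypothetical.

def match(learner_prefs: dict, resources: list) -> dict:
    """Pick the resource whose adaptations satisfy the most preferences."""
    def score(res: dict) -> int:
        adaptations = res.get("adaptations", {})
        return sum(1 for k, v in learner_prefs.items()
                   if adaptations.get(k) == v)

    best = max(resources, key=score)
    # Preferences the best match still fails to satisfy become requests
    # for derivatives or augmentation (the demand-supply pipeline).
    unmet = {k: v for k, v in learner_prefs.items()
             if best.get("adaptations", {}).get(k) != v}
    return {"resource": best["id"], "gaps": unmet}

prefs = {"captions": True, "language": "fr"}
pool = [
    {"id": "video-1", "adaptations": {"captions": True, "language": "en"}},
    {"id": "video-2", "adaptations": {"captions": False, "language": "fr"}},
]
result = match(prefs, pool)
print(result)
```

In a real system the "gaps" side is the interesting part: it is what gets fed to crowdsourcing and peer production, and the feedback loop described next refines the matching itself.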
And then lastly, we have a feedback loop that creates a dynamic research engine, allowing us to continuously refine and improve our matching. And of course, we're powered by an expanded, diverse community of producers, peer producers, derivatives, reuse, and resharing. There's a Floe video which I won't play, but I invite you to go to Open Education Week and look at the video that's there. So what does this have to do with standards? Both Todd's talk and mine were supposed to be about standards. Well, one analogy I frequently give is that if my family goes shopping, the only way we can all go to our specific destinations, whether it's the gaming shop, the tools, et cetera, is if we have a common meeting place. Diversity, in fact, is not inconsistent with standards; standards enable diversity. And so what we're doing in this area is trying to standardize flexibility, multimodality, updateability, diversity, continuous improvement. And the link I show here is the inclusive design handbook, which tells you how to do each of these things.

So, piggybacking on both Jennifer Childress's and Jutta's presentations: at OER Commons, which is the project that I manage at ISKME, we're working on taking some of the models that have been talked about by my co-panelists and implementing them within our OER system. We've been curating OER since 2007. We originally had a Pokémon collect-them-all approach to aggregating all of the resources that were initially out there in the field of OER, and have moved recently to a much more refined curation strategy of thinking about how we manage the OER in our collection and how we interact. We have an OER fellows teacher program where teachers do professional development in our system and do some co-creating. And so we've implemented a couple of tools that reflect the on-the-ground status of some of the models that were just talked about.
One of them is the Achieve OER Evaluation Tool, and I wanted to quickly demonstrate what that looks like, so you can get an image of what it means to take something like a rubric, a pedagogical assessment, put it into an OER system, and see what you can do with it. For us, what we've found is that this really enhances the quality of the OER that we have and our ability to understand what's good about its different aspects. So this is a resource page on OER Commons for a resource that has been evaluated using the Achieve OER Evaluation Tool. You can see on the right that you can mouse over this graph and see which Common Core standards this particular resource has been aligned to and how different users rated it on the scale from superior to not applicable. You can also go through the process of evaluating a resource yourself, so I'll open that up just to show you what it looks like, because it was quite a technical challenge to put a condensed academic rubric into a tool, but I think it came out really well. So you can learn about evaluating OER there and start evaluating. The tool itself was set up to lay on top of the resource, so that you can easily go back and forth between the resource and the evaluation process. If you want, you can minimize the tool and just look at the resource, or you can go through it. I'm not going to demo it fully, but just to give you an idea of how you might think about implementing a tool like this in your system: the code for this particular tool is open and available on GitHub, and we're also interested in working with partners to implement this in their own learning management systems. So you can select (I'm going to mess up the data here; I have to go in after every demo and remove it), say, that this particular resource is found to be superior, or go on to the next rubric.
And when you go through the entire process, at the end you'll be taken to a summary screen that shows you your score and the average score for that particular resource. So that's one application of one of the principles we've been talking about here. Another relevant project is our Open Author tool, an authoring tool for OER that we're working on rolling out shortly. It uses the Floe system and the learner adaptations to create an environment where you can intentionally create OER with universal design principles in mind, and understand the resource you're creating or looking at in our authoring tool based on metadata elements that we're inferring about its accessibility. So in this particular resource you can see some of the learner adaptations that are available for it on the side. And of course I can't scroll down, which was what I was afraid was going to happen. I'm going to delete this text; basically, this is like a fancy wiki, so if I delete that, that brings it up. So for this particular resource you could look at mouse access: is it possible to use this learning resource using the mouse only? Keyboard access: is it possible to use this learning resource using the keyboard only? Can you download this resource as an e-book? These are examples of transformations of the content that you could do. And also hazards: we're thinking about different environmental considerations that might come with this particular learning resource and using it in the world. And translations: as part of this project we're looking at a translation tool within this authoring environment that facilitates some of the translation work that teachers are doing in our fellowship program. Another Floe-based tool in this environment is Learner Options, which allows you to change the display of the material to fit your needs.
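The adaptation metadata Lisa walks through (mouse access, keyboard access, e-book download, hazards, translations) amounts to a small set of per-resource flags. Here is a minimal sketch of how such metadata might be represented and queried; the key names are illustrative assumptions, not OER Commons' actual metadata vocabulary.

```python
# Hypothetical per-resource accessibility metadata, loosely modeled on the
# adaptation flags described above. An authoring tool could use the
# missing entries to prompt the author for more adaptations.
resource_meta = {
    "id": "lesson-42",
    "access": {
        "mouse_only": True,       # usable with the mouse alone?
        "keyboard_only": True,    # usable with the keyboard alone?
        "ebook_download": False,  # downloadable as an e-book?
    },
    "hazards": ["flashing"],      # environmental considerations
    "translations": ["en", "es"],
}

def missing_adaptations(meta: dict) -> list:
    """List the access features this resource does not yet provide."""
    return [name for name, available in meta["access"].items()
            if not available]

print(missing_adaptations(resource_meta))  # ['ebook_download']
```

The point of keeping these as explicit flags, rather than free text, is that a matching service can then compare them directly against a learner's declared needs.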
So this is a toolbar that you can just pull down from the top on any part of OER Commons; shortly, not today. So I can do different transformations here. I can change the resource so that it's black and yellow; I'll reset that. I can change the spacing of the resource if that's a better fit for my visual needs. I can change the size of the text as large as I want to go. I can also change the font. It also gives you access to ways to show a table of contents on pages that don't already have one built in, and to reorganize the material on the page in a way that's more suited to your learning style. And you can also emphasize links. So basically it gives you a whole suite of tools with which you can change your learning experience. And we think that by employing these kinds of tools in OER environments, we have a real chance to be an edge innovator and to come in in some of the places that traditional systems aren't looking at, or don't have the flexibility to implement these kinds of innovations in right now. We can come in and meet these very high-priority needs and then hopefully get much more attention drawn to the movement this way. And in terms of the standards: Jennifer mentioned that people are really scrambling to look for content that's been aligned. We find that with the teachers that use our system, and this is a great opportunity for OER producers and aggregators to look at tagging items to the Common Core and thinking about making that supply stronger. So that's just some demoing; I'm glad it went as well as it did. There's one more feature, if I have a moment. One thing that this system also does is enable universal subtitles. I don't know how dangerous this is to demo, but it's pretty cool.
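The display transformations in the Learner Options toolbar (high-contrast theme, spacing, text size, font, emphasized links) boil down to rendering a learner's preferences as style overrides. This is a minimal sketch of that idea under invented preference names; the actual toolbar's implementation and property choices are not documented here.

```python
# Illustrative sketch: turn a small set of learner display preferences
# into CSS overrides that could be injected into the page. Preference
# keys and CSS choices are assumptions for the example.
def prefs_to_css(prefs: dict) -> str:
    rules = []
    if prefs.get("theme") == "black-yellow":
        rules.append("body { background: #000; color: #ff0; }")
    if "font_size" in prefs:  # percentage of the default size
        rules.append(f"body {{ font-size: {prefs['font_size']}%; }}")
    if "line_spacing" in prefs:
        rules.append(f"body {{ line-height: {prefs['line_spacing']}; }}")
    if prefs.get("emphasize_links"):
        rules.append("a { font-weight: bold; text-decoration: underline; }")
    return "\n".join(rules)

css = prefs_to_css({"theme": "black-yellow",
                    "font_size": 150,
                    "emphasize_links": True})
print(css)
```

Because the preferences live with the learner rather than the resource, the same declaration can restyle any page the learner visits, which is what makes the toolbar reusable across the whole site.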
Because if I insert a URL and all goes well, the video will show up within my learning resource, and I'll be able to see on the side automatically updated metadata about the video subtitling that's available for that particular resource. This particular resource has been translated, so if I mouse over, I can see the English or French subtitles, and clicking on this link will take me into a subtitling system where I could add an additional translation in another language or modify it further. That's it. Thank you for coming.

This question is for Lisa, with regard to the OER Commons software. I was curious, for the evaluator: is the tool proactive in feedback to the creator, or is it reactive? And if it's proactive, how frequently does the creator get feedback that the stuff you put out here is not only not valuable for Common Core, it does harm, or that this is the best thing since sliced bread, that sort of thing?

Right now, the only feedback to content producers is through the ways that we're using it within our professional development settings. Jennifer can speak about feedback in other regards, but we would love to implement a queue, and that's something we've talked about developing, because we have relationships with 350 different content providers that contribute to our system. We've talked about pilot experiments with, for instance, KQED and PBS, around some of their materials, to have a feedback loop where they can find out what's going on. We do collect comments within the tool, specific to each rubric, that could potentially be sent to someone.

Yeah, and that's a very important point. That was the intention with the comment section: that it would be available, not just as a general comment that was hard to access, but available to go back to the content provider. We'd have to work out all the backend stuff.
In terms of the authoring tool within OER Commons, though, Open Author does show icons, and the icons change according to what you have in fact achieved or not with respect to the accessibility standards, translation, and those sorts of things.

So with the rubric tool, you had mentioned you're trying to solicit use by other sites, and I wonder if you could describe a little bit about how that works?

So, we've collaborated with Steve to work on an experimental node that sends the evaluation data from the evaluation tool to the Learning Registry. It's going there right now, but we would love to speak with potential consumers of it about how they're consuming the data, and we are definitely looking for usage-data partnerships. If someone else implemented this tool on their site to evaluate resources within their own environment, and also created a node for the Learning Registry, we could share back and forth what was going on with a particular resource. So ideally, if I'm finding an average score of two along the quality-of-assessment rubric on my site, and you're finding on your site that your users think it's a superior resource, we would be able to average that out across the two sites. That would be the ideal for how the tool would work, so that we'd be aggregating a large amount of very specific feedback on individual learning resources.

I'd like to just quickly comment on the metrics, the information that's attached to each resource. The Floe project takes a somewhat different approach, in that we're evaluating the efficacy of each resource for a particular individual learning profile. And so what is a quality resource versus not a quality resource isn't on a single scale; it's relative to each learner.
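The cross-site averaging Lisa describes can be sketched as a weighted mean: each site reports its mean rubric score and the number of evaluations behind it, so a site with many evaluations counts for more than one with a handful. The 0-to-3 numeric scale and the mapping of "superior" to 3.0 are assumptions made for the example.

```python
# Sketch of combining per-site rubric scores into one cross-site figure.
# site_reports is a list of (mean_score, n_evaluations) pairs, one per site.
def combined_mean(site_reports: list) -> float:
    total_n = sum(n for _, n in site_reports)
    if total_n == 0:
        return 0.0
    return sum(mean * n for mean, n in site_reports) / total_n

# One site averages 2.0 over 10 evaluations; another finds the resource
# "superior" (taken here as 3.0 on an assumed 0-3 scale) over 5 evaluations.
print(round(combined_mean([(2.0, 10), (3.0, 5)]), 2))  # 2.33
```

Exchanging (mean, count) pairs rather than raw ratings keeps the shared payload small while still letting each participating site compute the same combined figure.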
And so the data that we gather on what a resource is useful for actually diverges, and we continuously refine, because part of the system is to adapt, reconfigure, request derivatives, request augmentation. So the resources actually diverge as they're used and as we gather further metrics. But the interesting piece about that is that it acts like a fairly dynamic research engine regarding learners we don't really have very much data about: the learners who are usually eliminated as noise in any data set. And so this is a continuously contaminated but fairly dynamic research tool for finding out how those learners learn, and that information is attached to each resource.

Just a brief reflection on that: I think we have been trying to find similar dynamic data sets that different websites can surface, whether learning management systems, learning object repositories, or whatever they might be. In the last year and a half, I just haven't found very many organizations that are capable of handling more than one data set per resource. Even the notion that I'm a teacher and I work with third graders, or I'm a teacher and I work with EL kids, even that level of granularity is just not in the data models; people don't want to consume it. They just aggregate it up into one number, 4.5 out of five stars or whatever. So I'm not trying to diminish what you're doing at all, just to express a little bit of frustration that we want that kind of data flowing through the community through any means, Learning Registry, anything else, and we just can't find enough consumers. Supply side, we can generate it, but we can't find people who want to consume it, or enough people.

So we actually consume it with the Floe system, so that it's used to refine the learning experience for the learner.

Right, which is great. So I'm applauding that, but I'm just observing that we have a consumer-side problem here: there aren't very many of you.
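The granularity the panelists are asking for, ratings kept separate by evaluator context (third-grade teachers, teachers of English learners) instead of collapsed into one star average, is a small change to the data model. A minimal sketch, with invented facet names:

```python
# Sketch of a faceted rating store: scores are keyed by (resource, facet),
# where a facet describes the evaluator's context. Aggregation happens
# per facet, not across the whole resource, so "great for grade 3" and
# "weak for English learners" stay distinguishable.
from collections import defaultdict
from statistics import mean

ratings = defaultdict(list)  # (resource_id, facet) -> list of scores

def add_rating(resource_id: str, facet: str, score: float) -> None:
    ratings[(resource_id, facet)].append(score)

def facet_mean(resource_id: str, facet: str):
    scores = ratings.get((resource_id, facet), [])
    return mean(scores) if scores else None

add_rating("res-9", "grade-3", 4.5)
add_rating("res-9", "grade-3", 3.5)
add_rating("res-9", "english-learners", 2.0)

print(facet_mean("res-9", "grade-3"))           # 4.0
print(facet_mean("res-9", "english-learners"))  # 2.0
```

A single overall star rating can still be derived by averaging across facets when a consumer wants it, but the reverse is impossible: once the data is collapsed to one number, the per-context signal is gone.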
Because obviously you have a very specific focus within the learning system, I think.

Actually, the intention is to apply this to all learners: all learners learn differently, and we want optimized learning for every learner. The discovery, the learning-to-learn, metacognition component of it is for all learners. We all need to figure out how we learn.

Can I add to that? I think you're pointing to something really important, which is that I don't think we've done our job communicating to the public what the problem actually is. This myth of an average learner is so pervasive up and down the whole system that people don't even realize they buy into it. And you see this because most people call for something like personalization or customization, and then you realize you can never get there on an average; it's literally impossible, right? So I think you're exactly right, and part of it is that we need to get ahead of this a little bit and start to explain the problem in a way that the public understands, so that the solutions actually make sense.

Yeah, totally agree. I'm just observing, and maybe you have a solution that you can share with other projects, but this normative aspect of the learning data is so pervasive. It's just baked into these data systems, and it's no one's fault; it's just much, much easier as an engineer to build a system that works that way. So what I'm worried about is that there are several gaps here. One, the engineering componentry: we didn't build our systems to accommodate these diverse kinds of data sets. Two, the people using our systems, and the user experience, are not set up to accommodate it. And three, the whole intellectual capital on top of that, of why I would do something that's non-normative, isn't there. And so we can fill any one of those gaps and the system still doesn't flow, so to speak.
And in fact, I mean, we're challenging to some extent research methods as well. Educational research methods depend upon norming data. If you look at the learners at the margins, there is really no useful information that's been gathered. By the time you've replicated the single case studies, or whatever you have available to you, sufficiently, it's not relevant anymore; the system has moved on. And so this is to some extent also a challenge to the traditional research methods, not just engineering, not just education, but at even more of a core level. And when I said we continuously contaminate our data, we do, because we need that quick implementation, and so we are breaking a lot of the research goals.

Well, can I say too, in the area of analytics, it's really frustrating to see that we're just gonna basically carry over our old psychometric models, and you're gonna get more of the same. If you look at analytics in areas like finance and intelligence and risk, aggregate data is useless, and that's obvious, but of course there was a market for that and a whole system set up, where you have whole different dynamic systems modeling and things like that. So I think this feels like a threshold problem: we actually have to look at all of these factors and push simultaneously.

There we go. I just have a question. We've kind of taught K-12 teachers in particular that if a child is under 13 you can enter no profile data, and you can't even put in their real name, as well as the HIPAA concerns of sharing specific learning disabilities attached to a child's name.
So basically what we've done is, for all children 12 and under, particularly here in the United States, we're making it so that you can't even collect data on those kids, because pretty much what's been told to teachers is that if they're under 13 you can't put any data in on the child, period, no matter what it is, and then you don't have to worry, because we don't know how, or if, they're using that profile data, or for what kind of research. I mean, there's a line there, but there's been a lot of discussion that COPPA is really outdated and is not written properly, and it's causing sort of a disservice to kids that are under 13 in many different ways, both socially and in education. So how do you get around the COPPA requirements when you collect this information on kids under 13?

So, it takes a while to get on. Canada is even more sensitive about privacy, and in fact brought many of these issues to the international front, and the systems that we create are frequently criticized at the ISO level because they don't conform to the Patriot Act, because you've got too much privacy stuff in there. So the system, right, or I can only talk about the Flow system: the identity of the student is kept completely independent of the actual learning requirements and requests, and in fact each student can create multiple requests, depending upon the context. So they can say, at night, when I'm tired and I'm at home and I don't have good bandwidth, this is what I want. So the privacy is very, very specifically built in there. We've had lots of requests for connecting the identity of the student with the information about what they require, or what they've set up as their needs and preferences profile, but the reason for keeping them apart is simply the privacy issues.

My question is about the sustainability of Flow, and how you're able to staff it.
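The separation being described, identity held apart from the needs-and-preferences requests, with multiple context-dependent requests per student, might look roughly like this. All names and structures here are hypothetical, a sketch of the idea rather than the actual Flow implementation:

```python
import uuid


class ProfileStore:
    """Sketch: preference profiles are keyed by opaque tokens, and the
    identity-to-token link lives in a separate store, so a matching
    service only ever sees tokens, never who the learner is."""

    def __init__(self):
        self._profiles = {}        # opaque token -> preferences dict
        self._identity_vault = {}  # student name -> list of tokens

    def create_profile(self, student_name, preferences):
        token = str(uuid.uuid4())
        self._profiles[token] = preferences
        # Only this vault, held separately, can tie a token back to a person.
        self._identity_vault.setdefault(student_name, []).append(token)
        return token

    def preferences_for(self, token):
        # All a resource-matching service is allowed to see: no identity.
        return self._profiles[token]


store = ProfileStore()
# One learner, multiple context-dependent requests:
t1 = store.create_profile("alex", {"context": "evening, low bandwidth",
                                   "captions": True, "video": "low-res"})
t2 = store.create_profile("alex", {"context": "classroom",
                                   "captions": True, "video": "high-res"})
print(store.preferences_for(t1)["video"])
```

The design choice this illustrates is that re-identification requires access to a second, separately guarded store, which is one common way to build privacy in rather than bolting it on.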
I mean, after the student submits the information, are these individuals that are figuring out what the right composition of resources is to meet that individual need, or is that automated?

So the discovery and refinement of the first declaration of what I need as a learner is through a series of interfaces, interactive activities where you can try out a whole bunch of things, and it leads you through possible options. Then you make the request. It gives you a resource that matches that, and then you provide feedback to say yes, that worked, or that didn't work very well. And what we're doing is reaching out to the federated repositories of resources. The feedback then serves to refine that, so it's not a human process. But in terms of the gaps, that's the thing that requires a lot of help: there's no caption, there's no description, there's no alt text for an image, or this just really won't work because it's a simulation and I can't even control the keyboard directly, I'm using a switch. And there we have sort of three stages. One is we go to crowdsourcing sites like Universal Subtitles and TechSoup Global. The next stage is we link into the alternative format industry, which is quite huge; there are a lot of public services that provide alternative formats and do things like captions and descriptions. And if even that doesn't work, then we go to a series of private services, and there it's actually linked into things like youth employment and employment in emerging markets.

So now, sustainability of Flow: Flow is linked first to a whole bunch of other things. It's this amorphous thing that is networked to all sorts of other areas, and it's part of something called the Global Public Inclusive Infrastructure. And even that is not one thing; it's a whole network of additional projects.
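The request, match, and feedback loop just described can be sketched as follows. This is a toy stand-in, not the actual Flow matching service; the resource features, the scoring rule, and the exclusion-based refinement are all invented for illustration:

```python
def match(resources, request, exclude=()):
    """Return the resource whose features satisfy the most of the
    requested needs and preferences. A toy stand-in for querying
    federated repositories."""
    candidates = [r for r in resources if r["id"] not in exclude]

    def score(resource):
        return sum(1 for need, value in request.items()
                   if resource["features"].get(need) == value)

    return max(candidates, key=score)


resources = [
    # A simulation with no captions and no switch access...
    {"id": "sim-1",   "features": {"captions": False, "switch-access": False}},
    # ...and a captioned, switch-accessible video on the same topic.
    {"id": "video-1", "features": {"captions": True,  "switch-access": True}},
]
request = {"captions": True, "switch-access": True}

first = match(resources, request)
# Learner feedback of "that didn't work very well" excludes the resource,
# refining the next match.
second = match(resources, request, exclude={first["id"]})
print(first["id"], second["id"])
```

In a real system the refinement would be richer than a simple exclusion list (requesting derivatives, captions, or augmentation, as described above), but the loop shape is the same: declare, match, give feedback, match again.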
So Flow is implemented in things like Access for All Ontario, which is the Ontario public service, where they have to educate 65,000 public service workers in compliance to X, and they then contribute a whole bunch of things. It's applied in health education. The power of it and the sustainability of it is that these are all linked together, and so all of the implementations reinforce each other and help to sustain each other. Thank you.

Just want to follow up. Unfortunately I lost the original question. Well, you're gonna win the argument then. So this is great. Yeah, it's gonna be easy now. I just wanted to comment, about the questions of COPPA and FERPA and HIPAA and data privacy, that there are a lot of restrictions, but primarily they restrict the flow from the formal system to the informal system. Many of us operate in the informal system, and therefore these laws look very, very frustrating: I run a private website, and I want kids to come onto my website and interact with my system, and all of a sudden all these laws start restricting what or how I can collect these data. True. But the woman, I don't know her name, made a much more general statement, which is that these are full-stop restrictions, and, I'm not speaking as an official of the federal government, I'm just saying in my experience and understanding of these laws, that is false. Teachers, she mentioned teachers, absolutely are expected to, and required to, where they're in loco parentis and teaching kids as an official representative of a school system, track those data and keep track of where those students are: name, fully identified, home address, parental rights, all that stuff. That goes right along with the disability information, personalization information, grades, all that information, whether they be three years old or 17 years old. So I just wanted to point out that the informal side certainly has a very legalistic boundary.
So that's one: the formal side doesn't have those boundaries at all, and in fact has requirements to keep track of these data. And then, just sort of more generally on the research side, there are research provisions in both HIPAA and FERPA to permit access to fully identified data for authorized researchers, and authorization is regulated in various ways, and I think we have one of those researchers on your panel who would know when and where they can access it and what rules go along with that. And then finally, there is a place where informal education can connect in here, which relates to when you become a service provider of a formal education institution. So if you are running a learning management system company and you get contracted and paid, or are in a contracted relationship where the liability flows down to you as a service provider to a school district, then those fully identified data can flow to you, COPPA doesn't apply, and you can keep track and do all kinds of great things with that school district on behalf of those students, and that all works just fine. So I just wanted to give the nuance. I don't know if that's too much detail, but hopefully helpful.

I just wanted to make a comment, I suppose, about the fact that in this area we found the power of openness coming through again, because we've had a series of projects looking at ways to make our material fit different accessibility requirements. And we used to find a huge problem in research projects in working with materials and getting permissions to use our own courses, and making courses open has changed this.
So actually we've now got a lot of open courses that can provide some of the experimental situation around this, and I wonder whether you've found the same: that the existence of open educational resources is almost essential when you're trying to produce lots of varieties, when you're wanting to even subtitle things. Because we used to find it really difficult, I'd say, just getting permission for our own resources to be changed in this way, and open resources just sort of made this field so different.

I want to say something about that. So with the transformations in the Open Author tool that we've created, I think we've found, in working with teachers on the ground, that we've been able to really promote open educational practices by even having these tools available in a staging environment to look at and think about as they're conceptualizing and creating their own lesson plans. The way our professional development is set up, teachers are sort of co-creating stuff, but they wouldn't naturally be thinking about some of the things that the fellows have been talking and thinking about now, which is like doing a collaborative subtitling project, or thinking about these things from the base of their production process, which is changing paradigms, I think.

So I have, I've been given the power to call a close to our meeting, which feels very good. Anyway, I just want to thank you, and I'm thanking myself too, this is great. Actually they just signaled, yeah, there's a class coming in. Everybody else goes through. Okay, yeah. Wow.

So I want to address Patrick's question as it relates to cost. There is an interesting set of data that OZERS collected on the amount of time, and therefore budget, that is spent in addressing digital rights management in the special education sector, and it's huge. I mean, it could fund I don't know how many OER projects here.
It is quite a huge amount of funding that is not intended to purchase rights, not intended to license the resource, but to determine how we can create derivatives that would make the resource accessible. With open resources you don't have to go through that process, so if we could recover all of that money, we would have quite a bit of budget.

And I think this gets to, I mean, we've experienced the same thing in our courses as well, but I think it speaks to the importance of, I mean, we even sometimes talk about what OER is, and these levels, and the difference between being free versus genuinely open. If you don't have that actual Creative Commons licensing, like, so we've used it in our course too, the remixing ability and republishing, you know, and when we didn't do that licensing initially, how restrictive it is when you want to actually get the power of the network and distribute it and such.

I'm gonna ask a different question, Brad, probably particularly of Dr.
Reed, which is, you were talking about the universal design approach, and I feel there's a challenge, which I know from my past in intelligent tutoring systems, and to summarize it: if you try and make too many versions, you make one good version, you can make a second good version, but by the third one it's not as good somehow, you've sort of got tired, and the poor person for whom the intelligent tutoring system decides on the third or the fourth version might not have the effort put into it. And I'm wondering whether there's the same risk here: that by tuning to the requirements of the learner, if the learner's requirements aren't well met by the versions you've got, even though it's the ideal in many respects, they'd be better off struggling against one of the others. How do you cope with that, I suppose, in your universal design system, and maybe also in what you're doing with Flow? How can you cope with wanting an infinite number of versions of things when there are never the perfect ones?

I can answer it from the Flow perspective. There, what we do is, the third way in which we accommodate the needs of the individual is to replace: addressing the same learning goals but with a different resource. And the nice thing about OER is, of course, the huge diversity of learning approaches to a particular learning goal, so we're not creating a lesser version of the defaults; we're actually looking at replacing as well.

All right, that's a good answer.

Yeah, I think that it's a common problem, and I actually think that part of the ecosystems that are being developed need to be able to be explicit about it. There's a danger in actually starting to overestimate what our predictive powers are with these kinds of models, for example, and when the learner starts to think that the system knows, I actually think we could end up in a worse place than we started,
almost like a cyber helplessness, where even if we could adapt to you all the time, you only end up being able to learn in this system, and you're not fully capable when you don't have good matches; sometimes you don't have good matches, and you've got to have strategies and skills to deal with that.

And to follow on that point, I mean, the key piece is the learning to learn. It's self-discovery, it's metacognition, it's figuring out how you learn best and then making that request. And so the system isn't sort of second-guessing you; it is addressing what you have discovered, the way that you and your education team have decided you learn best.

Do we have any more questions on pornography? I was very envious of that earlier. So do I get to call an end now? Alright, thank you very much.