I'm probably not going to do justice to his accomplishments, but here we go. Norman is the director of the Open Learning Initiative and the executive director of the Simon Initiative at Carnegie Mellon. He has spent his career at the intersection of learning and technology, working to expand access and improve the quality of education. His experience in higher education spans the two-year and four-year sectors of public and private higher education systems, both domestic and international, and also commercial institutions. Prior to joining OLI, he was a director of training and development at iCarnegie Inc., a CMU subsidiary chartered to deliver software development education through international partner institutions. He has taught computer science courses as adjunct faculty at the Community College of Allegheny County and served as a founding committee member of the Cook Honors College at Indiana University of Pennsylvania. He currently serves as a member of the board for the Shady Lane School and the Next Generation Learning Challenges-funded Kaleidoscope Open Course Initiative. Please help me welcome Norman Bier up for our next keynote address. That was fantastic. So we had a lot of abuse for the suit from my friends in the audience. Some of you might have participated in a poll, voting on what I was going to wear for OpenEd. And if you were looking at the poll, it was a tie between a Penguins jersey and a political t-shirt. I got a little feedback on Twitter and the wardrobe poll was canceled, so I went with the suit. Right. So first, thank you all for being here. Thank you for the kind introduction. Before we get into the real talk, I've got a couple of things I wanted to talk about. It is an honor to be keynoting the open education conference. It's a privilege to be standing here and to have this time with you. But I've been reflecting on just how remarkable it is for me personally to be standing on this stage, to be talking to this audience.
I am very fortunate to lead the Open Learning Initiative at CMU. When I joined the Open Learning Initiative about a decade ago, the part of the initiative that I was most interested in was the learning piece. At that point in time, OLI was building some of the most exciting and most robust courseware in the world. And it was thrilling to be able to join an organization that was deeply focused on student learning, on improving it, and on really helping us understand how human beings learn. The open part wasn't so interesting to me. I tried to take a little bit of a look at what open education meant back in 2010. As best I could tell, it seemed like there was one half of the crowd that was really anxious to bring down the publishers. There was another half of the crowd that had an almost mystical belief in the power of an open license to enact learning. But I couldn't really see what this work had to do with the deep scientific work that I was interested in at OLI. Open education was not for me. So it's kind of remarkable to be standing here on this stage. But I was a part of the open community, at least in theory, and so I got dispatched with my colleague, John Rinderle, to OpenEd 2011, where we were going to give a talk on learning analytics. Like a lot of the country in that fall, I was very concerned about what was happening in the economy, very concerned about what was going on with the Occupy movement, which had given me a lot of hope. And I walked into one of my first OpenEd keynotes having spent the morning reading the news. And the news was that Occupy Oakland was being cleared out. They were being tear-gassed, protesters being shot. I was a little worried about this.
And so I was a little bit concerned when, for my first open education keynote, I saw a guy jump out of a tent, go through an entire Occupy OpenEd shtick, move on to take a few shots at the earlier keynoter, who was actually the only person I knew at this conference and who I knew to be a pretty nice guy, and then move on to be relatively dismissive of learning analytics, the thing that I had shown up to talk about. I don't have a lot of memories from the rest of that OpenEd conference, I will confess. But I remember two common themes emerging from that keynote, things that I've now been hearing about for the past 10 years: that we're doing open education wrong, and in this case, that I was doing open ed wrong, that analytics were a problem. But also this ongoing tension between how we understand resources, how we integrate those resources into our instructional practice, what we should be doing with technology, and how we sustain all of this work. I've been fighting with all of this stuff now for a decade. The other thing that I really remember from that conference was the Twitter back channel, which was mean, you know, snarky and snide, sometimes almost vicious. And I have to tell you, I'm not a nice person. If I say that the Twitter channel was mean, if that's what got my attention, something's going on there. Open education, not for me. So it's remarkable to be standing on this stage. So I went home, put my head down, and got back to the work of trying to improve student learning. At this point, the thing that we were focused on was a project called the Community College Open Learning Initiative, where we were going to work through and build out four new gatekeeper courses. We were collaborating with hundreds of faculty from community colleges around the country. And a funny thing happened.
I kept finding that as we were trying to develop these new courses, put together this new courseware, any time we found material that had a Creative Commons license on it, it was actually a lot easier to integrate into our course. And the folks that were creating these materials were a lot more interested in talking with us and working with us on improving these things, finding better ways to make use of them. And it was interesting as well because I spent way too much time during this grant working with our general counsel's office, trying to get approvals and write sub-awards. And it turns out that a Creative Commons license is almost magic once you get your OGC to understand it. It ends up removing friction. It ends up greasing the wheels. And so what I was finding in this work was that open, in fact, was a prime driver for the thing that I really cared about, which was improving student learning. So, anybody ever had a bad breakup? 2013 was a difficult year in OLI's history. In fact, one of my support staff, Kim Wallenmeyer (anybody ever used OLI? A couple of you. If you've ever received an email back from the OLI help desk, you've received it from Kim Wallenmeyer), will refer to this period in OLI's history only as that difficult summer. So during that difficult summer, we saw a really interesting breakout in the organization, where some of my colleagues went out to try to take the methods that we were developing in OLI and see if they could be scaled in a more thoughtful way via a commercial platform. Our founding director, my boss, Candace Thille, headed off to the West Coast, to Stanford, to build out a sister OLI organization.
And there was a real question, first, on what I was going to do, because I spent the summer like Hamlet, hemming and hawing: do I stay, do I go? And having decided to stay, there was a real question on whether there was a continuing need for the Open Learning Initiative, whether there was support for it, whether it was a thing that the community cared about. And I'll say that these were particularly difficult days for me, really a low point, personally and professionally. And it was in that period that members of the open education community really reached out and lifted me up. Told me that this work was important. And it's been, from that period, a commitment of mine to see that this open piece can play a key role in how we advance and improve learning. This community has become an exceptionally important thing to me. This conference has become, honestly, my favorite conference of the year. I look forward to it each year. And so it is remarkable to be standing on this stage with you. I'm keynoting the Open Education Conference because it's become so important to me. I am a sucker for public acknowledgments, and I don't get the chance to make them that often. And so when I talk about the people that played such a key role back in 2013 and for the past 10 years, I was hoping that you could join me in thanking them. It's a long list. Not all of them are still here in the open education community. But for those of you that have played this key role in my work, this key role in my own life, and this key role in open education back at CMU, thank you. Help me out. All right. Like a lot of you, I've spent a lot of time over the past few weeks thinking about community. I'm from Pittsburgh, a lifelong Pittsburgher. We call ourselves Yinzers; not for us the gentle y'all of the South or the youse of the East Coast. Pittsburgh is a city that is defined by its geography. We're a city of three rivers and hills. And this means that we are a city connected by bridges, connected by tunnels.
Pittsburgh is a city of neighborhoods. And I've been thinking about those neighborhoods a lot lately: how these different cultures in different distinct areas rub up against one another, how they're able to engage and maintain their own distinct identities while still coming together in some kind of cohesive whole. The view that you see there is actually the view from my office. For the first nine years of my career at CMU, I had a windowless office. They just now gave me some space with some natural light. And when I look out at that view, I see some exciting things. So that tower off on the left, your left, that's the University of Pittsburgh. That is the Cathedral of Learning. And I love that language. I think it's language we don't use very often today, this notion that we're going to combine the secular and the sacred in that way. In the center, St. Paul's Cathedral and Central Catholic High School, whose practice football field you can see, where I sometimes lean out to give the Vikings some advice as they're training, which they don't appreciate. Way in the back, you see the Rodef Shalom synagogue, one of the largest synagogues in Pittsburgh. And on the right, you see WQED Studios, which is the home of Mister Rogers' Neighborhood. I love that view. And I love, again, this notion that on the one side we have a cathedral of learning, and on the other side we have some of the most interesting children's educational programming being produced secularly by a Presbyterian minister. Churches and synagogues in the middle. I like what we're doing with this combination of the sacred and the secular in Pittsburgh. You're gonna hear me talk a little bit more about Pittsburgh as I move on. Second thing I've been obsessing over that's going to show up: how many of you have read Walkaway? A couple of you? I am jealous of anyone whose hand is not up right now, because you get the chance to explore this book on your own. Cory Doctorow is a man that knows something about open.
Knows something about open communities, knows something about the weird ways that we interact and try to build things. And this is a really interesting and exciting novel, and it really has felt applicable, especially over the past few weeks, as he explores these questions of what happens when we straddle this weird space between economies of abundance and economies of scarcity. But he also explores, in a deep way, Coase's coordination problem, this question of how can we come together as a larger group to do things that are beyond the grasp of any one individual? How do we engage in superhuman work? Within Walkaway, we also see an ongoing theme in which Doctorow is misquoting a Scottish writer who in turn was misquoting a Canadian poet, over and over exhorting his characters to work as if you live in the early days of a better nation. I've been thinking about that as I thought about OpenEd. Now I'll ask: how many of you are here for your first open education conference, or have been in the space less than a year? In fact, oh come on, stand up, stand up. Seriously. There are 450 of you here. For a point of reference, back in 2011, there were only 300 people attending the OpenEd conference. The ability that you have as a group to impact and change and guide the future of open education is tremendous. You outnumber us. Isn't that fantastic? I hope that as you were standing, you looked around a little and identified some other first-timers at this conference. And I hope that you'll seek each other out in the break. I hope that you'll share your stories of why you are here. I hope that you'll talk a little bit about what you hope OpenEd will be in 2030. And I hope that you'll take a little bit of time, just a little, to stay off of the Twitter back channel. If this is your first OpenEd, you don't have to fight the old fights. Begin thinking seriously about where you wanna take this community. So that was the prologue. Now we're into the talk.
When the program committee asked me to speak, it was with the request that I talk a little bit about some of the work that's happening at CMU around technology, around data, and the ways that we're able to use data and technology to improve learning, to advance our understanding of how human beings learn. I have a usual shtick; if you've ever heard me speak, you've probably heard me talk about OLI, and I like to spend a lot of time in that talk diving in, talking about the data, talking about what we do. But I've had an increasing sense that the usual shtick was not getting it done. In part because, in community conversations around questions of what role this data should play, I'm hearing an increasing distrust of it. I'm hearing an increasing set of conversations arguing that we really need to get back to simply trusting our intuition in teaching, and that that should be good enough. So I've been trying to think about how I can talk about data, how I talk about technology, with all of you. In prepping for an earlier iteration of this talk, I ended up talking with my friend Michael Feldstein. Michael and Phil Hill gave a fabulous keynote back in 2015, almost prescient. Michael, Phil, you in here? Let's hear it for Michael and Phil. You guys should go back and re-hear their keynote. All right, so I'm talking with Michael Feldstein and I'm saying, you know, Michael, what am I gonna do? I'm gonna go in, I wanna talk about data. He says, Norman, you can't go in there and talk about learning analytics and learning factors analysis and linear regressions and p-values and blah, blah, blah. You need to go in and tell them a story. Okay, Michael, let's tell a story. But this can't just be a story about me, though I do love to talk about myself. This is gonna be a story about Sophia. It's also a story about Morgan and Morgan, who love to sit beside each other in class and answer when the other's name is called.
It's a story about Robin, it's a story about Madeleine, it's a story about Jiwon, it's a story about Aya. These are a few of the students that I've had the privilege to teach at Carnegie Mellon in introduction to computer science. But in talking about their story, and in talking about the role that open played in their education and, in turn, the role that they're now gonna be playing in the future of open, I do need to talk about this guy. Look at that handsome fellow. He has no idea what's coming for him. So, 1993, I am headed off to a public university. I'm a first-generation college student, a Pell Grant recipient, so I'm part of that line down there on the bottom. But otherwise, a pretty unremarkable kid, right? I am headed off to a decent public university called IUP as a lower-class white American male, with all of the privilege that entails and some of the lack of social awareness that you'd expect from that 18-year-old. Dual major: English literature, and then, because I wanted to do something practical, philosophy. Where are my English majors in the crowd? Where are my philosophers? I think the English-lit folks have it. Reasonably confident in my leftist politics, as you can tell by the Lorax t-shirt, right? And so I was fairly confident when I walked into my first English-lit class, taught by a man named Ken Wilson. And Ken, two or three days into class, I don't honestly remember what we were reading, I don't remember the larger discussion, but I remember Ken Wilson asking, how many of you in class would call yourselves feminists? Hands went up; mine didn't. Okay, curiosity: how many of you would call yourselves feminists? Okay, Ken's next question. How many of you believe that women deserve equal rights? How many of you believe that we should be getting the same pay for the same job? Hands up. So my hand went up the second time, and Ken just drilled me. Why was I possibly raising my hand the second time and not the first time? He called me out in front of the class.
I remember it being humiliating, a little bit embarrassing, but I also remember it sending me home and making me think that how we identify ourselves, the causes that we choose, matter. I have never been ashamed to call myself a feminist since. And with that perspective, I headed off to Carnegie Mellon, went to grad school, and then was very fortunate to begin working with an organization called iCarnegie. As you heard earlier, iCarnegie was a spin-out from CMU, focused on delivering CMU-designed software development education to a large crowd. We ended up working with a lot of schools internationally. Their faculty worked to use our materials, almost a prototypical OLI without the open part. And I was fortunate in that work to be working for a man named Allan Fisher. Allan had been an associate dean back at Carnegie Mellon. And in his work in the mid-90s, working with Jane Margolis, he sort of looked around and said, you know, there's something strange in the computer science department. Almost 90% of our undergraduate CS majors are men. And the 10% of women that we have coming into the class aren't lasting. We're losing them each year. Why is that? Allan and Jane spent years digging into this. And what they found was that there's an exceptionally large and complicated set of reasons that we're not attracting enough women into computing, and that when they're there, we're not keeping them. Some of this is curricular. Some of it is the materials that we use. Some of it is preparation, and a lot of it is cultural. I am proud for a lot of reasons to work at CMU. But one of the reasons that I'm most proud of it, one of the things that I love to point out, is that after discovering this, and after trying out some different things and experimenting over a 20-year period on Allan's work, our computer science department was able to move from 7% women to parity. We have a 50-50 class of women and men in computer science. I can't take credit for that.
So Allan leaves CMU to start this organization called iCarnegie, and he and Jane are working on this book, Unlocking the Clubhouse, and we're gonna try to apply some of those same methods to the courseware that iCarnegie was building. And so when we jumped in and reviewed our third course in the sequence and looked at the learning activities, it turned out that we had a whole sequence of learning activities that was biased against women. Women were not doing as well, and they were dropping out of our program. And Allan looked at it and said, well, now that I see this, the kinds of examples we're using are obviously gonna cause problems. So we changed the activities, sent them back out into the field, checked the data, and fixed the problem. This was revelatory to me. It was revelatory that we were able to engage in this kind of iterative improvement, because it completely changed how I thought about course materials and instruction. It was revelatory to be able to work with these folks and see them taking these things they cared so deeply about and putting them down on the ground level, making them work. It was also revelatory because, as a guy with two humanities degrees, I'd never seen statistics used in such an interesting and useful way. Really exciting. So fast forward about 10 years, and the Open Learning Initiative received some funding to build out principles of computing. 15-110 is a course at Carnegie Mellon. It's our intro course. My dirty secret is that I love to teach intro. And so we began working to build out this introduction to computing course. It's a course you can take now inside of OLI. And it was exciting. We were trying to put into this course all of the best things that we knew about learning. We were able to bring together a diverse team of faculty. We were able to bring in instructional designers and learning engineers.
We were able to bring in learning scientists; we even brought in an anthropologist to study the process. Designers, software developers, even a few PhD students who were able to engage in some research. When we worked to build this course, we really worked hard. I was very fortunate then to be asked to teach the course. This was interesting because it was my first time teaching with the OLI software. Let me tell you, it's always scary to eat your own dog food. I learned a lot from that experience. But it was also a really exciting time for me to go back into the class and talk a little bit about these questions of culture, these questions of who we invite into computing. And so this is a thing that I really spend a lot of time talking to my students about. You walk out of Bier's class, you're gonna learn how to program, but you're also gonna know who Grace Hopper was. Who was Grace Hopper? She found the bug, right? Everyone says, oh, Grace Hopper found the computer bug. Everyone knows that. Maybe if you do a quick Google search, you find the glam-shot Grace Hopper. My students don't get to know that Grace Hopper. They get to know this Grace Hopper. They get to know Rear Admiral Grace Hopper. Twice recalled to active service, promoted from the rank of Commodore. Amazing Grace Hopper, the namesake of one of the only US warships named for a woman. Builder of the first compiler. Grace Hopper was a giant. She didn't just find a computer bug. And my students learn about the multitude of women that have been contributing to computing. The contributions that were made in some of our world's darkest hours, the contributions that have been made in some of our greatest technological triumphs. And my students will learn about that weird drop that happened in the mid-80s, the role that culture plays in excluding people from our work. So my students will know that. Now you know it too. So I don't tell you all of this to virtue signal. I'm not bragging, although I'm proud of this work.
I'm telling you this as my bona fides. This is work that I care deeply about. This is work that I'm very enthusiastic about. And so when we started to dig into the data, when we started looking at the tale of the tape: though my students were achieving parity on grades, and I was retaining them as well as we would like, and though, digging into student feedback, I wasn't seeing real differences in the kind of feedback I was getting, when we looked at the activities, it turned out that on some of those learning activities, the women in my class still were not performing as well as the men were. So in some way, I was failing these students, despite the fact that we put in all this work, despite the fact that my intuition had given us the best set of courses that I thought we could have. This could be devastating, right? I mean, in some sense, it might be thought embarrassing for me to come up here and admit this failure to you. But what's exciting about this, what's most magical about it, is that we don't have to stop at that. What we're gonna do is dive back in, and we're going to fix these activities. We're going to cast a wider net to find better ways to get student voices into these activities. And we're going to send them out in the field, and we're going to test them, and we're going to see how they work again. And that's incredibly exciting to me. But what's more exciting is the act of doing this research, working with one of our open-ed fellows, Steven Moore. Steven, are you out here? Hiding in the back. So together, Steven and I aren't just taking this analysis and using it for this individual course; we're building it out into a larger set of analytic tools, so that we can take a look at all kinds of courseware and really try to key in on areas where we're showing bias against and for different kinds of demographics.
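To make that concrete, here is a minimal sketch of the kind of per-activity gap analysis I'm describing, using a simple two-proportion z-test. The activity names, the data layout, and the threshold are all hypothetical for illustration; this is not OLI's actual tooling or schema.

```python
# Hypothetical sketch: flag learning activities where one demographic group's
# success rate differs from another's by more than chance would suggest.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two success proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def flag_biased_activities(stats, z_threshold=1.96):
    """stats maps activity_id -> (successes_a, n_a, successes_b, n_b).
    Returns the activities whose between-group gap exceeds the threshold."""
    flagged = {}
    for activity, (sa, na, sb, nb) in stats.items():
        z = two_proportion_z(sa, na, sb, nb)
        if abs(z) > z_threshold:
            flagged[activity] = z
    return flagged

# Toy data: "loops-q3" shows a large gap between groups, "vars-q1" does not.
stats = {
    "vars-q1": (80, 100, 78, 100),
    "loops-q3": (90, 100, 60, 100),
}
print(flag_biased_activities(stats))  # only "loops-q3" is flagged
```

The point of the sketch is the shift in granularity: instead of one department-level statistic, every individual activity gets its own signal, which is exactly the scale at which a single author can act.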
And to me, that's an incredibly exciting and magical part of this work, because the problem of underrepresentation in computer science, that's a huge problem. It's a huge challenge. Not necessarily something I can do a lot about as an individual. But this learning activity that I've written has some problems for women? I can solve that. And I think that if we're able to show this kind of information to a broader population, everyone else will recognize their own ability to solve that too. Why am I able to do this? Well, I'm able to do this because OLI is obsessive in its collection of data, first and foremost. And because I have data that crosses many sections and many kinds of institutions, we're able to say useful and interesting things. So one way to say this, why I'm able to do this, is because I've got the numbers. But a different version of this goes back to OLI's origin story. And OLI has one of my favorite origin stories. One of the best. So some of you may remember back in 2001, heady days of the dot-com boom, lots and lots of excitement over the worldwide web and the ways that it was going to change the world. It has changed the world, in ways that aren't all good. Two program officers, Cathy Casserly and Mike Smith, made a major bet on finding ways that this technology could be used to expand access. MIT's OpenCourseWare project was intended to take any kind of course materials that faculty would share, lecture videos, quizzes, exams, notes, and put them up on the web. And the amount of excitement that this generated at the time is hard to overstate. Tom Friedman articles in the New York Times. Really big, exciting things. And this is interesting in part because this is before a period that we would clearly be able to identify as open education existing, right? We had tremendous work happening with the open universities in Europe. We had some conversations happening around open software. But OER as a thing, as an idea, didn't exist yet.
And so you almost see it being invented. So, hot off of all this excitement, Mike and Cathy headed down to Pittsburgh, gathered up some of our learning scientists, and said, hey, do you see all that great stuff that MIT's doing? Yeah, we see what MIT's doing. Wouldn't you like to do the same thing? No, no, we wouldn't. Not interesting to us. And so they left. Mike and Cathy happened to be visiting us on September 11th. And so by the time they got to the airport, no planes were flying anywhere. I've heard since that there was some hurried discussion of trying to drive back to California. Cathy vetoed that. And so they were stuck. They had to come back and talk to us. And we had a fairly unique opportunity to spend four days with two program officers who asked, okay, what is interesting to you? And the answer that they got was: look, we care about access. We have an access agenda ourselves, and we think that it's important. But what we really care about is effectiveness. What we'd really like to do is explore the ways that we can take advantage of these technological affordances to demonstrably enact learning and to find better ways to understand how human learning takes place. And they said, okay, that does sound interesting. And so with that investment, the Open Learning Initiative is born. And we're born with this mission to go out and design scientifically based online courseware, taking the best of what we know from the learning sciences, putting it into our courses, but recognizing that there are big gaps, that there are questions that we need to try to answer. And so we spend a lot of time trying to answer those questions. What's interesting is that in this moment, you see a tension that has been sticking with us for almost two decades. And it's that tension between access and effectiveness.
But in this moment, you also see a different tension, in that we start to glom together different communities, different neighborhoods, and we start to put them under a single banner, sometimes showing up with different kinds of agendas. Back to OLI. One of the things that's special about OLI is that it presents an integrated view of learning. So we've got integrated courseware. And our design process is a thing that my friend Dale Pike calls learning design as hypothesis: this notion that when we talk about a scientific approach to course design, what we're saying is that, deep down, we believe, we're making a hypothesis, that this set of learning activities will produce a certain learning outcome, will help students achieve a certain learning state. When I phrase it this way, it kind of sounds like I'm experimenting on my students. How many of you are uncomfortable with the idea of experimenting on your students? Yeah, it sounds a little spooky, doesn't it? Except for the fact that every time you walk into a classroom, you're experimenting on your students. You're just not being very explicit in your hypotheses, friends. You're not always collecting the data that you need to. But we're all experimenting on our students. This approach ends up being deeply rooted in a larger history of learning science and cognitive psychology at CMU. And so when we talk about how OLI works, part of what we're working to do is try to understand what's happening as these knowledge states change. And to do that, we need to acknowledge that we can't see learning take place. It's happening inside of our brains. And so we're stuck building models of what we think is happening. And if we're going to do something useful with those models, we need to get them out of those students' heads. We need to be able to use these models to connect what's happening in learning science with the kinds of new instructional practices that we're trying to design.
So this means, there we go, this means leveraging that science in the design of these experiences. It means using these models as we design new kinds of innovations. And it means instrumenting these experiences. Now, often you hear me talking about instrumentation, and we assume that this must mean a technology, it must mean courseware, but we can definitely instrument our face-to-face experiences as well. And what you have in this instance, what you see in this half of the diagram, is an exciting feedback loop. We're able to take these data as they come in and use them to continuously improve these practices, to continuously refine our understanding of how these kinds of resources and innovations work. But what's also exciting is that as we push this back into the learning sciences, my colleagues are able to take this and advance our understanding of human learning. There's an incredible virtuous cycle that's achievable here. At CMU, we talk about this as a learning engineering approach, which is a phrase that's gonna get me some snarky remarks on Twitter. I can live with that. And we talk about it as engineering in part because we're an engineering school, but also in part because the work of an engineer, in many ways, is to build more robust systems that are failure-tolerant. We must acknowledge that failures can take place, and the kinds of systems that we wanna build should be able to work around those. So when we talk about OLI, we're talking about a system of learning activities that's capturing different kinds of learner data, using these interactions to give feedback to students, to give really targeted hints, but we're also able to use these to give new kinds of feedback to our educators, things that they can use to change their classroom instruction. We use these data to iteratively improve our courses. And so this gives us the ability to look at different kinds of tools that we can put out into practice.
This is an example of one of them: the learning dashboard that sits inside of OLI. What you're looking at is a set of estimates of how well students are achieving specific learning outcomes. Sitting underneath all of this, we have a complicated learning model, but from an instructor's point of view, it's very practical. I can jump in to see: where are my students doing well, in the green? Where are the areas where they're struggling? And from there I can dig in to try to understand: what are the skills that are giving them trouble? What are the questions they're getting wrong? What kinds of misconceptions are they exhibiting? And then I can walk into class and change the kinds of instructional activities that I engage in. Now I just said that as though that's easy. I'm gonna waltz into class with some fresh instructional activities, right? The reality is that that's the hard part, and we need to be studying more and better understanding how we can integrate these tools into our instructional practice. What does that mean? We're not there yet. I am fortunate that OLI's original framing was as a research initiative, because in the end, I can be really comfortable telling you there's a lot of stuff we don't know. We're still working on it. We have other tools that we're able to use. We're able to dive in to understand just how well situated our individual courses are to give us the kind of information that we need. As that information comes back, we're able to start to understand the underlying learning models and make some improvements. This is a tool that's part of the CMU toolkit called the learning curve analysis. Have any of you seen learning curves before? A couple of you? All right.
So quickly, the idea behind a learning curve is that when you look at an aggregation of student attempts to solve a specific kind of problem, something that's really focused on an individual knowledge component, what you would expect is that the first time they try these problems, students should need a lot of help. Maybe they're asking for help. Maybe they're getting the questions wrong. But with each additional opportunity to solve these problems, you should see the amount of help they need go down. So at some very basic level, what we're saying is that if you've got a sequence of problems and you can see that, over time, students are able to solve these, you've probably got a pretty good learning model. And in fact, the one that you see up on the screen is a perfect learning curve. It's a thing of beauty. I'd like to tell you that every OLI course generates these beautiful learning curves right off the bat. But the reality is that we often see learning curves that look like this. The first one means that we're sort of wasting students' time; they already know these materials. The next two are interesting because they suggest that what we thought was a distinct skill probably has some extra stuff mixed in, and we probably want to tease that out and see how we're supporting our learners. The last one I actually still haven't figured out, and I've been looking at it for years. So these are some of the ways that we're able to dive in and understand what's happening with learning and how we can improve our courseware. But that ability to make these changes and to improve it ends up becoming so much simpler in an open space. And so this is another piece of work that we're engaging in. How do we actually start to build out better analytic tools that can be used by a broader audience? How do we take these data and make them actionable for a larger population of faculty and instructional designers?
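The aggregation idea behind a learning curve is simple enough to sketch in a few lines of code. This is a minimal, hypothetical illustration, not OLI's actual analysis (real learning-curve tools fit statistical models over per-skill data); it just groups attempts by opportunity number and watches the error rate fall:

```python
# A minimal sketch of the learning-curve idea: for one knowledge component,
# aggregate every student's attempts by opportunity number (1st try, 2nd
# try, ...) and compute the share of students who needed help or got it
# wrong at each opportunity. All names and data here are hypothetical.

from collections import defaultdict

def error_rate_by_opportunity(attempts):
    """attempts: one list per student of 1 (wrong / needed help) or 0 (correct).
    Returns the mean error rate at each successive opportunity."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for student in attempts:
        for opportunity, wrong in enumerate(student):
            totals[opportunity] += 1
            errors[opportunity] += wrong
    return [errors[i] / totals[i] for i in sorted(totals)]

# Hypothetical attempt data for four students on one skill:
attempts = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
]

curve = error_rate_by_opportunity(attempts)
print(curve)  # [0.75, 0.75, 0.25, 0.0, 0.0] -- error rate declining
```

A "good" learning model shows a curve like this one, declining toward zero; a flat curve, or one that never drops, is the signal described above that the skill model needs rework.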
You'll note, or hopefully you'll note, that when I just talked about analytic systems, I'm explicitly talking about human-in-the-loop systems. I'm talking about descriptive systems in which we are trying to support educators or trying to support students. I'm not talking about more predictive systems. These statistics are descriptive in nature. And I think that there's often a concern when we begin talking about this work that there's a danger of it becoming dehumanizing, that we're really trying to take faculty out of this loop, that we're trying to hand robot tutors to our students. And I don't believe that that's the kind of work we should be doing. I don't believe that these are the kinds of systems that we should be pursuing. For one, we're just not in a space where these kinds of systems can be reliable. But two, they sort of miss the larger and important social aspects of learning, these pieces that we know about human connection. And so finding ways to build these types of analytic systems that not just maintain that human in the loop, but really are able to enhance the human enterprise of education, is important to us. To achieve that, though, it's gonna require a much larger and more diverse audience contributing to this work. And so when we think about what the future of learning materials is, I think that part of that future must include these types of courseware systems that are able to give direct feedback to students, that are able to provide data back for iterative improvement, that are able to drive our larger understanding of how human beings learn. And I often hear an awful lot of hesitancy about these systems, right? Some of this goes back a very long time. We in education are naturally conservative, and new technologies have been scaring us for a while. This is from the Phaedrus; no one ever catches this. Socrates is bemoaning this newfangled writing thing, which is going to wreck the ways that students learn.
So changes to our instructional practice scare us. But we also are justifiably concerned about new technologies because we've had decades of venture capitalists and edupreneurs showing up on our doorstep and insisting that they're going to disrupt learning, that they're going to give us magic robot tutors in the sky, right? And so again, it's understandable that we're hesitant about these kinds of systems. But I would argue that we ignore these types of systems at our peril, because we have pages and pages of evidence that these kinds of systems can improve student learning, can deepen it, can make it faster, can make it more robust. And if we ignore these types of systems, we will end up ceding the design and development of these instructional experiences to folks outside of the academy. Either this means that we will be condemning our students to resources that aren't quite as good, or it means that we're going to be forcing them to pay for things that are no longer open. So the systems are going to be out there. But I think that there's an additional complication in this, in that when we talk about designing these systems, we are explicitly talking about designing instruction. And I would argue that the work of designing learning experiences, the work of designing instruction, is explicitly the work of not-for-profit higher education. This is a core part of who we are. It's a core part of our work. And I think that if we ignore it, if we outsource it, if we wash our hands of it, we do so at our peril. It's dangerous. And so I think that we really do need to make a better effort to claim this space, to engage more deeply with these types of systems. But this means claiming them as open systems, making sure that we can bring together open content and open algorithms.
If we do that, though, we end up being able to ask some really interesting questions and solve some really interesting problems, because it turns out that building these kinds of systems, or to be more specific, improving student learning, is one of those superhuman tasks. It ends up being beyond the work of any single individual. How many of you design learning experiences? Good. Do you find it challenging work? You should, right? And one of the reasons is that within every learning experience you design, you have an awful lot of instructional decisions that you need to make. How do we get started? With any individual intervention, do we start with the basics? Should we start with a more challenging understanding? What's best? We don't actually know. The answer is: it depends. And depending on how you answer, you then need to answer some more questions. Focused practice, distributed practice, maybe something in the middle? And we end up working our way down a decision tree that becomes more and more complicated. And this only covers a few of these branches. So within any individual instructional decision, you have a tremendous number of choices to make. How can we possibly know what is best? Any guesses on how many options are in this space? I heard "a lot." That's fair. Some of my colleagues back at CMU have actually calculated this. There's a fantastic paper in Science, "Instructional complexity and the science to constrain it." Over 200 trillion options. How can we possibly make progress as individuals against this kind of complexity? The answer is: we can't. Engaging with this kind of complexity, more deeply understanding how learning works, requires a lot of things. It's gonna require us to make small, thoughtful changes to course materials. It's gonna require us to share the results of using these materials out in the world. I would argue that what it really requires is openness. It requires open materials. It requires open practices.
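That 200-trillion figure is easier to believe after a little back-of-the-envelope arithmetic. The dimensions and option counts below are hypothetical stand-ins, not the actual taxonomy from the Science paper; the point is only that independent design choices multiply:

```python
# Back-of-the-envelope sketch of instructional-design combinatorics.
# Assume (hypothetically) 15 independent design dimensions -- spacing,
# feedback timing, worked examples vs. problem solving, and so on --
# each offering 3 options. These counts are illustrative only.

from math import prod

options_per_dimension = [3] * 15

# Options for configuring a single learning activity:
combinations = prod(options_per_dimension)
print(combinations)  # 3**15 = 14,348,907

# Make those choices independently for just two activities in a sequence
# and the space is already in 200-trillion territory:
print(combinations ** 2)  # ~2.06e14
```

Even with these toy numbers, no individual designer can search such a space by intuition, which is the argument for sharing results openly and letting a community explore it together.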
It requires the kinds of transparency that we expect of one another and that we've seen generally from this community. I think that making progress in the learning sciences and improving learning ends up, by definition, being an open challenge. And I think that this is another place where we need to think about what a larger and more coordinated system can look like. But when we think about those systems at scale, we end up really being able to do superhuman work. We end up really being able to better understand instructional context. And this to me is incredibly exciting. I love the idea that exploring learning, even the learning of a single human being, is something that requires more than a single human being. That this requires superhuman effort. This is exciting work. Getting there, then, solving this coordination puzzle, requires some humans, but we can probably be helped by software. It requires us to have some consensus on how we wanna proceed, on what things are important. It's gonna require us to share, and it's going to require collaboration. And we do need to acknowledge that if we're taking advantage of these kinds of software systems, the folks that are concerned have reason to be. We know that there are dangers. We see over and over in the news that as we start to implement algorithms and analytics, biases always seem to be creeping in, whether it's racial bias in healthcare, whether it's ongoing biases around socioeconomic status in education. We see this stuff creeping into our devices. I don't know how many of you saw the story recently about the soap dispenser that only worked for white people. And my friend Jutta talks a lot about this. How many of you know Jutta? She's not here, which is sad. I was hoping she'd be here. So this is actually her story. She talks a lot about a friend who uses a wheelchair, and instead of using the wheelchair to head forward, uses the wheelchair to go backwards. The person's able to get a little more speed that way.
They're doing a lot of testing in Toronto of autonomous vehicles and the algorithms that drive autonomous vehicles. What do you think the autonomous car sees when it sees a wheelchair? Which direction does that car expect it to travel? Forward, right? And so in a lot of tests, her friend is getting hit by the autonomous vehicles. Biases are going to creep into our software. Biases are going to creep into our tools. This isn't intentional. Nobody's trying to build a racist soap dispenser. I have a lot of friends working on autonomous vehicles. None of them set out to build robot murder taxis, right? But we need to acknowledge that as human beings, these biases do creep in. I mentioned earlier that part of my training as a software engineer was to recognize that at some very basic level, human beings make mistakes. I could argue that it's the nature of humanity, and so we need to build systems that are able to account for those kinds of mistakes. One way to do that is to refuse to use black box systems, right? We cannot use in our educational practice algorithms and approaches that we don't understand. In part, we shouldn't do that because we really can't trust them. There's a parlor game at CMU among PhD students of trying to figure out if they can reverse engineer closed algorithms for learning. Most of the time, it's not very hard. And there's a problem with these types of closed systems because that's not science. I think that as an open community, if we're going to take on the work of using and building these algorithms, we have a real advantage: our culture and practice of transparency can open up these algorithms in a way that lets us acknowledge that they're not going to be perfect, but also lets us borrow from the old Linux joke that given enough eyeballs, all bugs are shallow. I would argue that with enough eyeballs, all of our biases are going to be shallow.
What we need to do is attract a larger and more diverse community who are able to interrogate these algorithms and understand what's happening. We can't afford to reject these systems. We need to embrace them, but we need to embrace them on our terms. And we need to recognize that our terms can include things like thoughtful and ethical data use. I'm not going to spend too much time on this because I'm already running long, but for those of you that are interested in this question of how we can collect data and use it in research to drive these algorithms in ways that are ethical: there have been a lot of folks spending a lot of time on this, and I'd recommend you check out the work that happened at Asilomar. Pretty exciting, and work that's still in progress. If you've ever heard a talk from OLI, you've seen this quote. Herb Simon was a Nobel laureate at CMU who left the work that he was doing in economics to eventually end up really deeply focused on learning and education. His sense was that if we're going to fundamentally improve education, we need to begin treating it as research. And this research needs to be work that all of us undertake, right? With the same seriousness and the same respect that we undertake research in our core domains. And he believed that this was work that had to happen in a larger community. And so in some ways, I hope that you'll hear this talk and think about it as an invitation to join that larger community. Because what we need to make this work are larger, shared, ethical, and open systems. We need to build infrastructure that's going to support these kinds of efforts. We need to be able to find ways to safely but also ethically share data. We do know how to share content; we're getting good at it. But we need to be a little more thoughtful and a little more trusting in sharing the results of the use of that content. And on the whole, we need to take on the work of open science.
So what are we doing at CMU to try to support this vision? I'm really proud that earlier this summer we announced the release of the OpenSimon Toolkit. Carnegie Mellon has taken some of its best tools for learning science, for instructional design, for delivery, and we've open sourced them. It was a lot of work to bring them together, but we've put them out there in the field. We're hoping that others will jump in and take advantage of this work, and we're hoping that in making this a larger, more open ecosystem, others will begin to plug in their own tools and approaches. This lets us do some really cool things when this whole vision of an open system using open tools works. I'm incredibly excited to be collaborating with colleagues at the University of New Hampshire who are building out a set of open modules intended to simultaneously teach students about metacognition while also giving them the tools to learn better. Metacognition is really exciting. I'm excited to be seeing work happening at CMU around how we can teach core competencies: collaboration, better writing skills, better communication skills, better conflict management skills, and how these kinds of experiences can change the larger learning experience, because our early research suggests that students that are exposed to this kind of training and this kind of education end up reporting a much more positive college experience, which is pretty exciting. I'm incredibly proud of the work that some of my colleagues, Amy Ogan and Judith Uchidiuno, are doing in understanding what happens when we take these technology-enhanced learning tools and start inserting them into different educational contexts. This is a screenshot from one of my favorite keynotes, which we don't have the time to dive into, but if you head to my Twitter stream, a link to it is pinned to the top. When you have 40 minutes, this is another way for you to spend your time. And I'm incredibly excited to be working with colleagues at Santa Ana College.
Crystal Jenkins. Crystal, are you here? Let's hear it for Crystal. Where we are fundamentally asking: how do we more deeply involve students in this work? How do I bring those student voices into understanding where these models are failing? How do I encourage a larger learner population to help us address areas that the data has identified as deficient in our courseware? Really exciting stuff. And so what's needed in this space are more tools and infrastructure. But what's also needed are different kinds of social norms, different kinds of commitments: commitments to one another, commitments to the work. It also requires an awful lot of intellectual honesty, a little bit of humility, and I think a willingness to let our minds be convinced and changed by evidence. And I think that that work together can help us go forth and live as though we're in the early days of a better nation. So, I hesitated to include this last part of my talk. It was something that came to me months ago when I saw the date of this talk. The past few weeks seemed to make it a little more fraught. It's risky, and I tend to brood on things. So I'm wandering around the house, brooding, brooding, brooding, and my 13-year-old son said, what is your problem, Dad? What is wrong with you? Well, I'm worried about this thing. It seems really risky. Okay, Boomer. Then he came back and he said, why are you worried about something that's risky, Dad? Be brave. Then he walked up the stairs. When your 13-year-old tells you to be brave, you're sort of stuck doing the thing. So I wanna talk about something that happened in Pittsburgh a year ago. I mentioned to you that Pittsburgh is a city of neighborhoods. The Squirrel Hill neighborhood is a neighborhood that's at the center of our city, geographically, but also socially and spiritually. Squirrel Hill has one of the largest Orthodox Jewish populations in the US, and Squirrel Hill is a central spot inside of the Pittsburgh ecosystem.
I told you earlier that we call each other neighbors, and Squirrel Hill is literally the inspiration for Mr. Rogers' Neighborhood. Squirrel Hill's an incredibly important place. And a year ago, within this lovely neighborhood, we had one of the worst acts of anti-Semitic violence to hit the United States. Some of you probably learned about this from the news. I learned about it about an hour before it hit the news, when my then 12-year-old walked downstairs and said, Dad, I just got a text from David. David is the drummer in my son's band. David says he thinks our concert's canceled today because there's SWAT guys outside telling them they have to stay inside the house. And so we began getting alerts from Carnegie Mellon. CMU is only a mile from the Tree of Life synagogue. We began making phone calls, and I hadn't realized until that moment how many of my staff and colleagues live within five blocks of the Tree of Life. I associate this with the open education space in part because we heard about the shooting, went and attended a vigil, and the next day I jumped on a plane and spent some time with my colleagues at Lumen. And I ended up needing, on that one day, to call in to a staff meeting over Zoom, which was deeply painful and challenging. But I also want to talk about this because, as I told you, I find it difficult not to acknowledge things publicly when I have the opportunity. And roughly a year ago, 11 of my neighbors were murdered. So this is Cecil and David. This is Rose. This is Richard. This is Melvin. This is Joyce, who was a tremendous learning researcher. This is Jerry. This is Irving. This is Daniel. Sylvan and Bernice. These were my neighbors. And when we talk about what happened to those people, we can talk about the weird American gun thing, which is too big of a problem for me to tackle. And we can talk about humanity's weird anti-Semitic thing, but that's also too big of a problem for me to tackle.
And so what I do want to note is the weird and strange role that electronic communications played in the shooter increasingly whipping up his own rage against these people, people that he saw as apparently being too welcoming of immigrants, folks that he believed were taking his jobs. And when you go back and you look at this message chain, what you see are a lot of small steps that eventually lead to tragedy. You see a lot of misinformation. And it leaves me asking whether we as human beings were actually ever intended to engage electronically. I'm increasingly convinced that any kind of electronic communication, if it's the only way that we're going to engage, is going to end up becoming problematic. I think that we are primates, that we're really intended to see and touch and smell one another. How many of you are familiar with the Digital Polarization Initiative? All right, well, now you've all heard of it. I think that this is some of the most important work that is happening in the open education space. I hope that you go out and take a look at it, see how you can use some of these open materials to help your students better understand how they can find truth, how they can engage with this crazy stuff that happens on the internet, and how they can contribute back into this space, how they can contribute their own truths as well. And I think that as a community, we would do well to maybe take this course ourselves, to think a little bit about how we are engaging, or what kind of information we are spreading to one another through these electronic media.
Last quick bit, and with one more Cory Doctorow quote, one that I love. Because when we talk about the systems that we're displacing, and make no mistake, in open education we are trying to displace systems, we can often lose sight of the larger picture. As we're getting caught up in our own fights about hierarchy, our own fights about who's in charge, we lose sight of the fact that we are actually displacing much larger and angrier hierarchies in communications. And so the past few weeks, I think, have been difficult and challenging, and in some ways, these are the stakes for the kinds of work that we're doing. It doesn't mean, however, that we need to accept this. We're able to do a little better with these feuds and these blood sports. I am deeply thankful that the program committee asked me to come and speak today. It's really been a privilege and an honor, but I wanna really acknowledge the hard work that that program committee has done. I was speaking last week with a member of the program committee from last year who told me that it was one of the most thankless jobs they'd ever done, that they basically spent time receiving criticism and never any praise, and that was last year's committee. Any of you that have been engaged for the past few weeks know that these folks have been in for an awful lot of criticism, and yet what they have done is put together an incredible and cohesive program for all of you. It's work that I am excited to be a part of. They've reviewed over 400 presentations, and so I hope that you can join me in thanking the program committee for this tremendous amount of work. Thank you, T'Kahli, thank you, Matthew, Amy, Tanya. Thank you, Regina, thank you, John. Thank you, Christina, thank you, Kelsey. Thanks, Brandon; I have to harass Brandon. Thank you, Lisa, thank you, Minya, and thank you, David. And with that, go forth and see the great work that that program committee has done.
Go forth and see the great work that all of you are doing. Go do some good stuff together. Thank you.