Followed by a catered drinks reception with live music, and we hope very much you will join us for an evening of celebration. Also on display today here on the first floor, as many of you have already seen, is the FemEdTech Quilt. So please do continue to make the most of this unique opportunity to have a look at what that community project looks like in person. There are no other housekeeping announcements, so I am going to hand over to Natalie, our chair for the session. Please put your hands together and say welcome.

Thanks very much, and good afternoon, everyone. I think a significant piece of work that the ALT community has done over the past couple of years is developing the ethical framework for educational technology. It's a really important piece of work, and a very timely one. Many of us started off very much as techno-evangelists but are increasingly becoming maybe more cautious about some of the darker sides of technology. So it's a really critical piece of work that I think has much wider application. It was super to see the first ALT award for a case study of the use of the ethical framework yesterday, which Falmouth University won. So I'm really delighted that today, for our keynote, we've got Rob Farrow from the Open University. Rob is a philosopher by background but is very much involved in interdisciplinary research and learning technologies. I'm sure he's going to tell us a little bit more about himself, but he's also really going to explore and unpick some of those ethical aspects around educational technology. So if we could perhaps put our hands together and give Rob a warm welcome. Thank you, Rob.

Hi, everyone. Great to be with you today. Thanks for your patience while we sorted out the classic technical issues. You have to come to the Association for Learning Technology conference to really get the high-quality technical problems.
So it's a great pleasure to be with you today and to be invited to talk to you about ethics and educational technology. I'm aware of time, so I just want to crack on, really. Here's a brief overview: there are only seven chapters to what I'm going to talk about. First of all, a very brief introduction to myself, my background, and the kind of stuff that I work on. Then the framework for ethical learning technologies is going to be a kind of bookend at the beginning and end of the rest of the presentation. I'm going to take you through a bit of a breakdown of different perspectives on ethics. I'm not presenting myself as an ethical expert per se, and certainly not a model human; I'm sure there are plenty of people here with quite a lot to teach me about being ethical. But what I do want to do is try to convey some of the landscape around ethics, and maybe different ways of approaching ethical questions that might be relevant to your work. Then I'm going to move on to talking about contemporary issues around ed tech as I see them, bearing in mind that I'm very much an ivory-tower kind of recluse. You're the ones actually doing this stuff on the front line, and so it's maybe a little bit removed from daily practice. I'm not working directly in a higher education or further education institution. But I do have a kind of unique overlap in my disciplinary backgrounds, which hopefully you'll find interesting or insightful. So, first, about me. I see a few familiar faces here today, which is always good, especially after the last couple of years. For those who don't know me, I'm a senior research fellow in the Institute of Educational Technology at the Open University. I've been there since 2009, when I was in the final stages of writing up my PhD. I actually spent about ten years as a full-time philosophy student: bachelor's degree, master's degree and a PhD.
But since starting at the Open University, I've been moving more in the direction of educational technology, especially open education. I'm part of a team in the Institute of Educational Technology that works almost exclusively on open education projects. In my work I also try to explore ideas around openness: what does it mean to be open? What are the values associated with openness, and that kind of thing? We also do mixed-methods research, evaluation, all that sort of stuff, and supervise doctoral research. You can see here on the right-hand side of the screen some of the projects that I've been involved with. The two big ones at the moment are ENCORE, the European Network for Catalysing Open Resources in Education, and GO-GN, which you may have heard about already at the conference, the Global OER Graduate Network. So that's where I'm coming from. I do research for a living, but I have a background in philosophy that includes quite a lot of interest in ethics, so there's a kind of overlap with all this stuff. Turning now to the framework. Obviously, everyone is hopefully familiar with this by now; it's been talked about yesterday and today. And I'm going to explain why I think there's a real need for this kind of framework in educational technology. There are the four quadrants, awareness, professionalism, care and community, and values, and I'm going to come back to those at the end and explore them a little in relation to the rest of my presentation. But I want to start by saying how this came about from my point of view. There was a working group; it's very much an ALT initiative. I'm actually a Johnny-come-lately to the whole thing, because I didn't get involved until pretty close to the end. And my role was really to try to shape very diverse contributions from the members.
Those contributions really reflected the members' own diversity, the different roles they have, the different perspectives they have, and my job was to try and shape that and give it some kind of structure. So I just want to be very clear that I don't consider the framework to be my work, right? It's the work of this community. My role was really to help shape it and present it in a way that would be accessible and make sense and so on. I also consider it to be an ongoing endeavor, as I'll return to at the end. So, to start off with, I'm gonna give you three different perspectives on ethics. But I want you to appreciate that they're not mutually exclusive. They can be compatible with one another, but they've got different characteristics. So please bear that in mind as I go through these three things. And while we're on caveats, obviously there's so much to say about ethics as a field, right? I have to cut a few corners and oversimplify a few things to cover the territory I wanna cover in the time available. So please don't jump on me on Twitter if I skip something, or if you've got a cool ethics perspective and I didn't mention it, because I'm just trying to find a route through this that makes sense in the time that we've got. So, traditional ethics, by which I mean philosophical ethics in the Western tradition, essentially. The key thing here, I think, is that from a philosophical point of view, ethics is really about being systematic in the way that you apply judgments about different ethical or moral scenarios. Systematicity is the characteristic feature of ethics in philosophy. And I think you can characterize ethics as starting from something like this position. What I mean by that is: you're already doing it, right? You're already an ethical being. You have a sense of right and wrong.
You may have cultural or religious backgrounds or commitments that inform that. You might have psychological aspects that inform that. But everyone's already moral; everyone's already doing this. And when we do philosophy around it, essentially what we're trying to do is make sense of those sentiments, not start from a blank page, write down an argument and say: you must be like this if you're going to live a good life. So the starting point is people already living, already being ethical in their own ways. That's not to say that people necessarily agree about right and wrong, but the starting point is already there, already doing something. When you talk about philosophy to people who don't really have that much interest in it, they sometimes think of this kind of thing, right? A bunch of white dudes in togas hanging around, with a bit of time on their hands maybe, indulging every intellectual curiosity and so on. And you're kind of right. There are no togas really anymore, but there are still some similarities. And I like this picture, Raphael's The School of Athens, because it's one of those Renaissance pictures that has lots of hidden meanings and things encoded into it. You can see Raphael himself, actually, if you're very eagle-eyed, peeping back at you from one corner. So there are all these hidden messages in there, but I just want to draw your attention to one thing in particular. This is Plato on the left and Aristotle on the right, two of the ancient Greek philosophers who have been extremely influential. You can see Plato has the grey beard; his robes are colored the colors of air and fire, and he points upwards to the transcendent world. His idea was that truth isn't found in this world; it's found in the world of forms, a transcendent place where reason reveals what's going on.
And Aristotle, holding his Ethics, has brown and blue robes, earth and water, and he gestures around at the material world. His is one of the first scientific outlooks, if you like, saying that actually this is where knowledge comes from: the world in front of us. And when I was preparing this, I kept coming back to this idea of visibility and invisibility from different angles, so I'll return to that theme as we go on. But in some ways, a lot of what there is to say about philosophy is just in that picture, which is quite cool in a way. Alfred North Whitehead famously said, with his tongue in his cheek, I think, that the safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato. Obviously there's been a lot going on since Plato, and he knows that. But what you can say about the relationship with ancient philosophy is that the domain of philosophical inquiry hasn't really changed that much since then: the fundamental questions that philosophy attempts to address haven't changed that much. You might say there are three main areas or domains. Metaphysics: what is the fundamental nature of reality? Epistemology: how do we know things? What is knowledge? And ethics: how should we live, and what kind of values should we have? Concentrating just on the ethics part, and this is a modern take now rather than an ancient one, philosophers normally say that ethics is subdivided. You have metaethics, which is really about clarifying the language we use when we talk about ethics. Then normative ethics, which is about the standards and principles that we arrive at so that we can consistently apply them when we systematize it. And within there, the three main positions that people talk about are deontological ethics, consequentialism and virtue ethics. There are a few big words here, but it's not as complicated as it sounds.
Deontological ethics is really about what kinds of duties, rights and obligations we have. Consequentialist approaches emphasize the outcomes of different decisions instead. And virtue ethics is all about personal qualities, personal development, excellence of different kinds and so on. The third main area would be applied ethics: what happens when you apply all that to real scenarios and real moral controversies. When I first started working in ed tech, there were people with very, very different disciplinary backgrounds. That's one of the cool things about ed tech: you can come at it from quite different perspectives. But when people were doing ethics, they weren't necessarily coming at it the way that I'd been taught and trained to do it. So some of the work I did around this was to try to articulate some of these basic positions in a way that people could apply themselves. This is really the kind of thing you would do if you were taking an undergraduate course in philosophical ethics; you would learn how to do this sort of stuff. And so I was interested in asking: how could you create a tool that would let people do that for themselves? Some of what I've tried to do is use this kind of breakdown, which explains the relative advantages and disadvantages of different approaches, and then make them succinct and put them into a framework. So this is the framework from a paper that I wrote a few years ago. The idea here was really that people are increasingly doing things outside of institutions: they're creating learning experiences outside of institutions; they're interacting with people outside of institutions. And if that's the case, and you don't have the kind of ethics checks that you would normally go through, what kind of guidance can you give people?
What kind of tools can you give people to help them make sense of that? That was really what motivated this work. On the left-hand side are the criteria that you would cross-reference against these normative ethics positions; these come from looking at the ethics guidelines published by bodies like the ESRC, the BPS and BERA about what they think is important. And the reason there's a consistency across the different ethics guidelines is that they all have a common genealogy. They all go back, essentially, to the aftermath of World War II, when you get the first codes of practice for research ethics emerging, up to, I think it was 1978, when the Belmont Report formalizes this, and then they all build from there. So that's one way to get into applying this stuff: through these ethics codes, these frameworks. And that's really important, right? That's a way that these philosophical debates have been codified into things that can guide practice. This is an example of an ethics checklist, from the UKRI website. It looks a bit janky because I had to screen-crop it and make it fit on the page. But what I would note is that this is an indispensable part of practice, right? For anyone doing anything that affects people, you have to be aware of the things that you might do. But at the same time, a checklist-based approach can make it seem like ethics is just a series of logic gates that you have to pass through, just a form that you have to fill in so that you can then put it to one side and carry on with the thing you were trying to do in the first place. And you know, it'll be like: is there going to be bodily fluid, yes or no? Is anyone gonna be on drugs, yes or no? And you do this thing, fill the form in, off it goes, and you can move on from it.
And I wanna encourage you not to see ethics like that: not as just a moment in time, or an administrative process, or a checklist. It has to be something continual, ongoing and reflective. So that's the first approach, if you like: the traditional philosophical approach and how it gets used in these kinds of checklists and codes, and in being guided by councils and that kind of thing. The second perspective I want to explore I'm calling the critique of ideology. If we return to this picture and take a different route from here, focusing on the "you" part: I think if you look at the history of philosophy and how ideas about subjectivity change over time, it's quite interesting. And I quite like this picture; it's the picture from Wikipedia used to represent Cartesian subjectivity. "Cartesian" comes from the work of Descartes, and Descartes had an idea about what it is to be, not even necessarily a person, but a mind. Descartes is known for this process of doubting his own existence, doubting everything in the world, doubting all kinds of different things. This is the method associated with Descartes, and he's doing it because he wants to arrive at a true form of knowledge. He ends up with the idea that the only thing he can't doubt through this process is that he is thinking. I'm having thoughts; therefore I am a thinking thing. And in Descartes you can still be skeptical about whether you've got a body or not. For this reason, Descartes is known as a dualist: you have a mind and a body, and they're separate; they're not the same thing. Lots of philosophical consequences follow from this that aren't relevant right now. But part of what Descartes put forward was the idea that you have a kind of unmediated, transparent access to the contents of your own mind. So you know your own mind.
You're a bit like the guy in the head, watching the movie of your existence unfold, right? And no one else can see in there; that's for you only. This is a strange picture, admittedly. It's like The Numskulls; remember the old comic strip? The question is, what's going on in that little guy's head? Is it this again? Is it just an endless regress? Who knows? Anyway, the point is that Cartesian ideas about subjectivity just mean that you have your own mind and there's nothing in there that you can't access. So then you get characters a bit later on, like Marx, Nietzsche and Freud. They didn't call themselves the "masters of suspicion"; that was a label applied to them later by Ricoeur. Although I would pay good money to see a Netflix show where they run a Victorian detective agency called Masters of Suspicion. Not one season, though; it needs at least six seasons. But the point with these three is that, in different ways, they challenge this idea that you have transparent access to your own mind. With Marx, there's the idea of economic structures; with Freud, the unconscious; with Nietzsche, the will, the irrational pretending to be rational, that kind of thing. So you have this idea that maybe there are things influencing us that we're not really aware of. I did my doctoral work on the Frankfurt School, an interdisciplinary approach very interested in ideas around domination and emancipation. They drew on lots of different areas, not always successfully, maybe, but as a piece of intellectual history, what they were trying to do is very, very interesting. They ended up being quite pessimistic about technology. And to give some background: they were dissidents in Nazi Germany who fled as refugees to America. So a lot of what influenced their work was trying to understand how fascism arose.
Why, if we've had an Enlightenment, and we have science, and society is progressing and developing, did we end up in this state of fascism and totalitarianism, on both the left and the right? They were critical of pretty much everything. One of the key texts here is Dialectic of Enlightenment by Horkheimer and Adorno. They set out this idea that when we use technology, we're setting ourselves up to be imprisoned, in a way, by our own success. We develop systems that ultimately dominate and oppress us. They call this instrumental rationality, where you see everything as a means to an end, and the whole world is just there to be treated as a resource. Obviously you can go into a lot of detail around this; I'm not gonna go into any more than that. But I would like to get you to reflect quickly on the idea of progress and technology. Technological progress. Here I've got two quotes to compare. On the left you have Martin Luther King, who says: "Deep in my heart, I do believe we shall overcome. And I believe it because somehow the arc of the moral universe is long, but it bends towards justice." So, given enough time, things move in the right direction, is how I would interpret this. Given enough time, we make moral progress; we get better. And I'm not saying that's wrong. If you think about the extent of human history, Steven Pinker has written a book arguing, for instance, that the amount of violence the average person is subjected to has been decreasing over time. But compare that with someone like Adorno, who says: "No universal history leads from savagery to humanitarianism, but there is one leading from the slingshot to the megaton bomb." As I said, he's known for his pessimism, right? But the point is: yes, we make progress technologically.
We're always extending our technical powers, but we're not necessarily making moral progress at the same time. Morality doesn't really work like that. And you might think, actually, I do think we're making progress, and that's fair enough, right? I'm just putting it there as a prompt to think about. But if you think about this idea of progress, look at what's happening at the moment in the United States with the overturning of Roe versus Wade, something that was considered a big progressive achievement for a long time, now reversed. A lot of people will say that is not progress. Other people will say it is progress. If you were a Christian fundamentalist, for instance, you might think: great, we want to see more of this kind of progress. So you can have different ideas about justice and different ideas about what's right in the great scheme of things. I would say we can't rely on progress as a given. Another important text from the Frankfurt School tradition is One-Dimensional Man, which is really an analysis of industrialization and capitalism, and of how people can essentially lose their humanity as they get subsumed into these bigger systems. And Marcuse, who wrote it, was very interested in the question of how you retain the possibility of critical thinking and critical approaches when the systems around you are foreclosing those possibilities. Again, it's a pessimistic take, but I think it's an important one and worth being aware of. Around the same time, you get the same themes emerging in things like emancipatory pedagogy: Freire's Pedagogy of the Oppressed, for instance. A very similar set of assumptions about the problem, a very similar diagnosis, but here focused on how we structure pedagogies in such a way that we can counter it at the same time. How can we make things less mechanistic? How can we make them more like a dialogue?
And I think there's still so much interest in Freire's work, for good reason. It's really important to have this angle on what education does and what education is. The 1960s was quite an interesting time for all this stuff. At roughly the same time, at Stanford, they were doing some pretty amazing stuff. I don't know how many people have encountered this before: the Mother of All Demos. A few hands, right? If you're a learning technologist, or anyone who works with technology in an office, check it out on YouTube. It's amazing what they were doing in the sixties. In these demonstrations they were going: right, we're gonna do telecommunications; we're gonna have people working on the same document in real time; we're gonna do stuff with a mouse. Here's a thing called a mouse, you know? And all of it is there, in one big bundle: version control, cut and paste, hyperlinks, video conferencing, what-you-see-is-what-you-get. 1968. It's pretty amazing. And in some ways we're still using that model, right? Lots of people's jobs became this in the pandemic. This is like a Zoom call plus Microsoft Office; this is Teams or something like that, in 1968. Pretty amazing, I think. I have a clip here; I wonder if I can play it. So by the 1990s, just short of thirty years later. Let's see if it plays. Do we have sound? No? Is there a volume I can change? This clip is amazing, if you've not seen it. All right, I'll just describe it. Basically, it's all about the possibilities of the internet. They don't even call it the internet; it's the "information superhighway". And this is the idea of everyone being connected: you can access any piece of art from anywhere in the world at any time, that kind of thing. And she goes on to send an email to President Clinton and gets one back saying, hey, thanks for using the internet, or something like that.
And then they throw a bit of shade at John Major and say: we can't even send an email to John Major, because he hasn't got a modem. And it's like, yeah, things have changed quite a lot, right? In a way, at this point in the nineties you had the kind of "end of history" idea: liberal democracy has triumphed, and now we get to just live well, moving into the millennium, with the internet being a big part of that. But also, if you remember using the internet in the nineties, and I didn't really use it till I went to university, which was 1997, it was quite a different experience, right? When it was just: hey, I've got a website, here's my homepage. And there was no social media, no platforms. "I haven't got one... at least nothing beyond the usual thing of leaving it to market forces." Perfect timing. So yeah, the 1990s had quite a different vibe when it came to the internet. And what I would note about it is that there was not really much going on that was invisible, whereas now there's a lot going on where it's like: what is actually happening on the internet? What is actually happening with data flowing around? If you have a look at the trackers and the cookies on different pages, it's pretty scary, and there's no real way to escape it at all. Even if you opt out of everything, they'll still track you; they'll still make a shadow profile, and so on. So, a bit of a change. On a related note, this is from 1999. It's a bit of blurb in a book; I couldn't really trace it, it's just a picture that was going around online. It's about Google: "Google is a pure search engine. No weather, no news feeds, no links to sponsors, no ads, no distractions, no portal litter. Nothing but a fast-loading search site." I guess at the time the main competitor was Yahoo, and Yahoo was kind of a bloated mess. But again, pretty different, right?
And the whole evolution of Google is quite interesting in this respect. This is from 2004: "Our search results are the best we know how to produce. They are unbiased and objective. We do not accept payment for them or for inclusion or more frequent updating. We display advertising, but we label it clearly." And yeah, that's not really true anymore, right? They dropped that. They also dropped "don't be evil", if you remember "don't be evil". It seemed like a pretty small thing: yeah, don't be evil, right? Shouldn't be controversial. But I guess it stood in the way of something they wanted to do. So, no more "don't be evil". And "don't be evil" was closer to the spirit of the original dot-com boom and that kind of thing, I think. Foucault was mentioned in a presentation this morning, and I want to mention him as well. He's not a Frankfurt School thinker, but he's sometimes mentioned in the same breath because he was interested in the same sorts of things and living at the same sort of time. There's Foucault's analysis of the panopticon, Bentham's prison design with a central observation tower, where all the prisoners basically regulate their own behavior because they don't know when they're being watched. Some people have applied this same idea to things like the internet and social media, and used it to put together the idea that we're suffering a reduction of our freedom, our autonomy and our democracy. So that's the second perspective, if you like: ideology critique, or looking at what's hidden, the things that affect our subjectivity below the surface. The third category I want to present I'm calling the ethics of care. And again, you know, you can take your lived experiences as your starting point.
With the ethics of care perspective, the main difference compared to philosophical ethics is that in philosophical ethics, what you're trying to do is almost erase your own subjectivity, your own feelings and your own biases when analyzing ethical and moral situations. When you take a care perspective, you start instead from the idea that care is the primary starting point for ethics. So instead of thinking about philosophical arguments, you focus on interpersonal relationships. This is sometimes closely associated with feminist ethics. It's not quite the same thing: you can have a feminist ethics from a traditional philosophical point of view. But there is a lot of overlap. And this difference in foundation is now used for a lot of intersectional perspectives: thinking about things like historical discrimination, oppression and injustice, and particular claims to recognition from groups such as different races, genders, people with disabilities and so on. And so now we have, rightly, quite a lot of focus on diversity, equity and inclusion as the modern correlate, really, of that ethics of care. Here you obviously have different perspectives on ways to be inclusive, to recognize diversity, and to use equity as a way of addressing historical power differentials and things that have been historically unfair. It's also the sort of situation where sometimes people like me are obliged to just be quiet for a bit and listen to what people have to say. And people tend to be pretty grateful when that happens. I just want to mention quickly: check out the EDI project on GO-GN, which is where we do some of our work around diversity, equity and inclusion. I'll put the slides up when I'm finished speaking as well. So, quickly, on this theme of intersectionality: this is from a book on data feminism from a couple of years ago.
And the table is quite handy because it illustrates the different perspectives here. The authors say that the traditional approaches are basically technocratic: they have a false idea of objectivity, so you think you can just wash away any bias by improving your algorithm, that kind of thing. Whereas the intersectional approach emphasizes the importance of context and historical forms of oppression. I'd probably say I don't like to dichotomize it quite as much as this, maybe. I think there is a continuity between things like justice and ethics. But then, I haven't written a book on data feminism, so maybe I don't know what I'm talking about there. But I like this quote from Mary Midgley, a British philosopher, which basically says: look, ethics is very complicated and it evolves over time. We can't throw everything that came before into the dustbin; we have to continue to build. And ultimately, small changes in emphasis in the ways we think about ethics can make a big difference. It's very hard to be aware of our own biases in that respect. Just to finish off this stuff around the ethics of care, and I realize this is probably shorter than anyone would like, because there's a lot to say: there's the importance of networks and communities for this kind of thing. A lot of the time, and I'm thinking of the work of Illich here as well, we need things that are informal and autonomous. We can't necessarily rely on our institutions to provide all of the context we need for these ethics of care perspectives. I've got the FemEdTech Quilt here to illustrate this, which I think is a good example, both as an artifact and as a network. For the ethics of care stuff to make sense, you need communities, and communities that have meaningful relationships between them. Another quick plug for GO-GN: GO-GN is an example of this kind of community.
All of our links are informal; people from all over the world are part of the network. And I encourage you to go and check out GO-GN. So now, quickly, a thought experiment for you. Who's ever done a thought experiment? Well, you're going to do one now. It's not that tough, though. It sounds unpleasant, and it is maybe a little bit unpleasant. It's called the drowning child, and you can join in on Vevox. I'm going to outline the situation for you, and then you're going to vote, right? And I'm going to customize it for us here in Manchester, because I'm a pro. Imagine that tomorrow morning you're walking here to the conference from your hotel, and in the canal you see a child who seems to be drowning. You can wade in there; it's not that deep, so whether you can swim or not doesn't matter. You can save the child, but you'll be inconvenienced. Maybe your nice clothes are going to get messed up, or maybe you'll end up being a bit late for one of your talks. Do you have a moral obligation to save that child? Yes, no, or don't know? Play along at home. So, just waiting for those results. Okay. It's not that easy for me to see here. So we've got about, what, 90%? No, hang on, 80% yes, 10% say no, 10% don't know. Okay, that's 20% of you thinking, not sure about that, I quite like my shoes or something. So the second question is: now imagine there's a child on the other side of the world, whom you're never going to meet and never going to see. But you can save their life, because you can donate to a charity that will give them the medicine they need. Do you have the same obligation to that child? Yes, no, or don't know? This is a famous example from the ethicist Peter Singer, and most people who study philosophy will encounter it at some point. So we had 80% saying yes before. Let's see what it comes back as now. Okay, so it's pretty difficult for me to say.
So this time we have about half saying yes, and about a quarter each saying no and don't know. The pattern seems to be that we feel less obligation to that child than to the one in the canal in Manchester. The question here is: do our moral obligations change based on visibility? Rationally, it's the same example, right? You can save a child. You're inconvenienced, but only a little; maybe you can't get a takeout this week or something, but you can save a life. The thing is, psychologically it does seem to make a difference to whether we feel obliged; visibility does seem to matter in that way. One interesting thing is that on the back of this way of thinking, they've built a charity called The Life You Can Save, which has done all kinds of interesting work that's worth checking out. I like to compare that with something from edtech. This is from open education, around building and sharing OER. The idea here is that, in a way, we have an obligation because we can share education, knowledge and learning with people all around the world, and it hardly inconveniences us at all, right? The infrastructure is there; you put something online and it's out there. I think this is an interesting comparison: from a moral point of view, how much difference does it make whether someone is in your vicinity and visible to you, or on the other side of the world? You can extend this way of thinking in a different direction as well, which is to ask: what about people who are not yet born? Do we have an obligation to them? There are probably going to be billions, even trillions, of people to follow us. Do we have an obligation to create the right kind of world for them to flourish and prosper? So instead of a geographical distance, you've got a temporal distance, a distance in time. There's an idea here called longtermism which is basically this same sort of thing.
So anyway, I want to move on to talking about edtech, and I probably need to speed up a little bit. The first thing to say is that ethics is an under-researched area in edtech. While there's lots of work going on that's ethically relevant, there's a lot that isn't really explored from a research perspective. And there isn't really a professional ethics as such for edtech; the FELT framework is probably the closest thing to it. There are structural issues around edtech: things like privacy, surveillance, autonomy, maybe conformity, as well as all the stuff around diversity, equity, inclusion, and so on. But because of the nature of some of these things, it's difficult to collect data that could reliably inform strategies and policies. Something else you find in the literature is the idea that most educators and most learners are not really that well informed about what's going on with data: how to manage it, the risks associated with it, and whether informed consent can really be assumed in all these different scenarios. If you look at some examples of frameworks for thinking about ethics, here's one to support ethical decision making around learning analytics. It's quite process-driven and quite formal: first explore the issue, then apply your institutional lens, then think about ethics, then document what you've done. I'm not saying that's bad, and I'm not saying it's not practical, but I think you could follow this sort of process in quite a superficial way. Another approach people take is to provide a set of principles. Here's another example from learning analytics, this one from Slade and Prinsloo, which is more like a set of principles to base what you're doing around. So you do have to interpret those in your own way and try to apply them.
I think in some ways that's better than just having a formal checklist, because you still have to think and apply it yourself. And sometimes you get combinations of these, where you'll get some principles along with some stakeholder groups and different values: a kind of mashup where you have to put the different things together and see what comes out. I think that's good, because you still have to think: how am I going to put this together? How does it affect all these different stakeholders? Those things are worth doing. Another area, maybe one I'm more familiar with, is the idea of specific clusters or constellations around different areas. There's some really interesting work going on combining open education approaches with social justice, and I've provided a few references here so you can see how that works in practice. It's definitely ethically informed. I would say it's coming from an ethics of care perspective, but it's using research as a way to fulfill that and make sense of it. So, thinking about where we are now. In some ways it's like this meme, which you've probably seen before, where everything is fine but the room's on fire. Anyone relate to that? The last couple of years, maybe. At least COVID's over, though, right? At least everything's okay economically. We have good political leadership. Boris got Brexit done. What's to worry about? Obviously, COVID was a public health crisis and an institutional and pedagogical crisis, but we don't think of it so much as an ethical crisis. Yet it was, because all of a sudden people were expected to take decisions that benefited someone they might never meet, right? Wear a mask because you might save someone's life; they might be six degrees of separation from you. That's the expectation, that's the ask.
I think there are also some important aspects around self-care: relating to yourself and looking after yourself in the face of an unfolding crisis with so much uncertainty, where it was so difficult to know what was going to happen. The whole online pivot, and I'm sure everyone's got their own story to tell about it, was very taxing, very hard, with a lot of demands put upon people, all the different stakeholders, all the different people involved in the processes. And we're still making sense of all this, I think. One thing I wanted to mention quickly, though I'm aware I should probably speed up a little, was the use of algorithms to grade A-levels in 2020, where they were basically saying: okay, you attend a private school, that must be better, so you can have a better grade, even though no one actually sat the exam, right? So increasingly, I think the use of AI and algorithmic approaches is a concern. The whole thing is increasingly data-driven, and the way machine learning works is more and more data all the time, plugging in more and more and more. There's no limit to that. Increasingly, we're using biometric data as well: filming people in their own home to make sure they're not cheating in an exam and that kind of thing. I do want to draw your attention to this report by Human Rights Watch, which found that in the big push of the online pivot to get everyone learning on VLEs and so on, hundreds of products recommended by different governments turned out to be harvesting data from millions of children, without any sort of consent, without any sort of knowledge, completely invisibly. That's done, that's gone; no one knows what's happening with that data. If you were to do it under normal circumstances, it wouldn't be allowed, right? But the question is: what's happening with this stuff and big data that we're not really aware of?
With AI, you get the idea of algorithmic bias, which I'm just going to nod towards, really; I don't think I've got time to go into it. But I do want to point to some of what's happening in response to it. The AI4People ethical framework is partly a response to these concerns about bias. What they do is synthesize a lot of different ethical codes down to four principles: beneficence (doing good), non-maleficence (not doing harm), supporting autonomy, and justice. And they say a new principle is needed, which is explicability. The idea with explicability is that you expose what's happening in algorithms; you have transparency and visibility. This is something I'm quite interested in from the point of view of open education: is there a connection between being open and being transparent and visible? I think there are also implications for pedagogy, because if you expose how the pedagogical process works, does that affect how people learn? Are you going to make it easy for people to hack systems or hack assignments because they know exactly how it's done? Normally pedagogy works on a little bit of opaqueness, right, at least until afterwards, when you say: okay, now I see what was going on in that learning exercise. If you go down the road of explicability, there are also some further questions. You could ask: explicable to whom? Because what you might understand as a tech specialist isn't necessarily what a learner can understand about how a machine learning algorithm works. So the question is: for whom? And this research by Marcus and colleagues shows how you can divide this into interpretability, which is the human-level understanding you might be able to put in front of a learner and say, this is how the algorithm works.
But that might not be the true description, which they call fidelity: the technical description, if you like. And sometimes with machine learning, people don't actually know the technical description; they can't tell you, because it's happening inside a black box, or it's a semi-autonomous process where a software program just runs. Well, I do think there could be a new sort of role emerging in the future, something like a broker who would translate between different stakeholder groups what is happening with AI and how it's being used. Another thing I want to mention quickly is the idea of the metaverse. You'll maybe have seen that there's a bit of a push to go back to recreating virtual campuses, right? It's partly a response to the pandemic, I guess. You could see it as an extremely cynical thing as well. I think, partly, making things into a bad computer game doesn't necessarily help anyone. But there are also a lot of companies vying for territory in this space and for the data they can harvest. And there's potentially a digital divide aspect too, because the VR kit is not cheap to get into. If you want to get the cheap one, that's the Oculus, right? Which is made by Meta, basically Facebook, who have, to say the least, a very sketchy history when it comes to data harvesting and what they do with it. So I think that's something to be aware of, and I'd be interested to see if anyone feels like they're getting pushed in that direction. Another thing I want to say quickly is about the sort of language around ethics that we hear, and the idea that a lot of the time there's something, I'm calling it ethics washing here, where ethics is basically used as PR and presentation without any real commitment to doing anything.
This comes in different flavors, and you may have encountered it, you may not, but we've certainly seen quite a lot of it in open education, with openwashing and commercial companies branding themselves as open, for instance. There are all kinds of ways you could brand yourself as an ethical organization or an ethical practitioner, and sometimes it's hard to tell who's genuine. On a related note, I think this is quite an interesting piece of research, published in the summer, where people in 14 countries were surveyed, almost 7,000 people in the sample. They found that three quarters of people said the kind of DEI policies their organization had were just lip service: they didn't mean anything, they were just branding and PR. Three quarters, that's quite a lot. A similar number also said that they thought their employers' COVID-related policies were just lip service and PR again. So we live in a kind of hostile information environment a lot of the time, and that's quite important to take account of. Again, it's not really transparent; it's not really visible what's going on. So just to wrap up this section, here are, I guess, my takeaway points, thinking about these different perspectives on ethics and how to orient yourself. It all starts with your own experiences and your own ethical subjectivity. It's also important to know that ethics is about communities, not individuals. Communities have ethics; individuals might have morality or a code of practice or something like that, but ethics is a feature of groups and communities. Dialogue is really important; sharing perspectives is really important. No one person has all the answers, so you have to have those dialogues in place. I've tried to show that there are different perspectives.
Maybe I've tried to do too much in the time available, but I think it's really important to know that there's quite a breadth of stuff out there, and that we need frameworks to help us reflect, without getting trapped into thinking it's just another bit of bureaucracy. I encourage you to think about visibility and to think about your influence; maybe it goes further than you think. And thinking about care: take advantage of opportunities to care for people, allow yourself to be cared for, and take care of yourself as well. I think all those things have been quite tough over the last couple of years. I also encourage you to think about alternative infrastructures, non-formal networks, wider communities; all of these are really important resources. So, to go back to where we started with the FELT framework. I built this presentation just out of what I was thinking and what I would say about these things, but it actually maps quite well onto what other people provided in the crowdsourcing around the framework. Things like awareness: understand your context, understand your orientation, understand what ethics means to you. Professionalism: make use of the codes and guidelines that are there; they embody hundreds of years of philosophical reflection. Community is important, care is important. And having the right values: interrogating your values and trying to improve them, discursively and collaboratively. So basically, FELT is not a finished product; it's an ongoing thing. I would say: consider this the first iteration. Contribute to the next one. Have your voice heard and have it integrated. Your experiences matter, and it's important to capture that knowledge in the framework for other people to use. My understanding is that ALT is going to encourage people to map it to other professional networks, but also to align CMALT activities to the framework.
And we saw some good examples yesterday in the awards of how people can apply it. But what I would say to you is: keep ethics as a central concern of what you do. It's really important, and it's easy to put it to one side and not foreground it. So I just wanted to leave you with this quote from the feminist writer Rebecca Solnit, which I think captures both the subjective and the objective nature of ethics: the stars we are given, the constellations we make. That is to say, stars exist in the cosmos, but the constellations are the imaginary lines we draw between them, the readings we give the sky and the stories we tell. Thank you. Thank you very much, Rob. We don't have time for questions, but I just wanted to make a few reflections and say thanks very much. I think that's been a fascinating talk, and you've covered so much ground; it's certainly one I'm going to go back and revisit. One of the key things that stuck with me, when you were going through that ethical checklist for research, was that when we had the early discussions, I remember Bella mentioning: let's have a look at the sort of ethical research guidance we have in our institutions. We'd also started off with that learning analytics checklist. I think we very quickly decided we didn't want our consideration of ethics and learning technology to be reduced to a checklist, because we felt that was very reductive. And I think what will stay with me is this idea that we actually need to be thinking about ethics all the time, and about making what is invisible visible. The key thing for us, as Rob has said, is that this is the first iteration of the framework. We've always seen it as something that's going to evolve and develop, shaped by practice and application. And the ALT community is critical to how it's going to be shaped in the future.
So it's thinking about that. We've talked a lot at this conference about how we have an opportunity to shape the narrative of the future of learning and teaching, and part of that shaping is scaffolded by the ethical framework. So how do we, as practitioners, bring that ethical voice, that ethical perspective, into how we shape the future of learning and teaching? Thank you, Rob, for making us think about that, for setting it in that whole framework of the evolution of ethics, and for making sure that we can bring those perspectives to our respective tables, institutions and organizations. So thank you very much. Thank you.