Okay, good morning everyone. Thank you so much for joining us at today's Winter Summit. My name is Kerry Pinny; I'm interim Chief Executive of the Association for Learning Technology. It's a pleasure to see you all here this morning, ready for this fantastic day of conversations about ethics and artificial intelligence, which, as we all know, is an extremely hot topic in our field at the moment. And it's a delight to have you all here for this final conference of the year. We've got lots more events coming up, as ever, with our various member groups and special interest groups, so do keep your eyes peeled for upcoming events as we end 2023. It's fantastic to see you all here, and I'm so excited to have you.

We have Natalie Lafferty and Sharon Flynn, who are our co-chairs for today, so you'll see Natalie and Sharon throughout the day; they'll also be facilitating the Q&A. This event is powered and sponsored by Kaltura, so we're using the Kaltura platform today, and a huge thank you goes to them: without them, this event wouldn't be running, and without the support of our many vendors and business partners, a lot of the events that ALT runs just couldn't happen. You'll hear shortly from Sergei, who will share a little bit about Kaltura and what they're doing. So thank you so much to Kaltura.

Just a reminder of our programme for today. We've got a nice full programme with lots of interesting sessions, and every session will be followed by an opportunity for Q&A, so please do share your thoughts, questions and comments during those sessions. We have a short break or a longer break between all of the sessions, which should hopefully give you time to join in some of the chats and various other activities we have planned throughout the day. If you require closed captions: there are no closed captions within the platform itself, but please use Google Chrome, which is the recommended browser, and there is the option to turn live captions on in Chrome.

If you haven't found your way around the platform already, let me show you around very briefly. If you ever need to get back to the main page, you can click on the ALT icon in the top left-hand corner. From the agenda page you will see the title of each session, the time, the date, the speaker and any other information about the session, including its abstract. When you're in a session, as you are now, then depending on the settings you may not see the options to use your video or microphone or to raise your hand, but those options are there. We also have the chat and the Q&A over on the live stage. So if you haven't spotted it already, click on 'live stage' over on the right-hand side; I can see lots of people in there already saying hello to everybody. You can use that throughout the day to chat with one another and add comments and thoughts, and we will share them with the speakers. You'll also see in that area a separate tab for Q&A, and that's a really great spot: if you've got a question that you want to raise, you can pop it in there and we can answer it for you. The other thing you can do is connect with other delegates.
Now, you can choose not to, and you can disable private chat if you prefer not to be contacted by others. But if you're open to being connected with other delegates, then please do use 'connect with other attendees'. You can search for other attendees; if they don't appear in the list, that means they're not open to being privately messaged, but otherwise you can message one another there. We have the session chat, but we also have the overall programme chat: when you're on the agenda page, the chat in the live stage there is the chat for the whole of the conference, rather than for each individual session.

And I'm really pleased to introduce our co-chairs for today's conference, Natalie Lafferty and Sharon Flynn. Natalie is the head of the Centre for Technology and Innovation in Learning at the University of Dundee, and Sharon is currently on secondment as the national coordinator of the Enhancing Digital Teaching and Learning programme for the Technological Higher Education Association. Both Natalie and Sharon are ALT trustees, and both were instrumental in the early consultation and the early work on the Framework for Ethical Learning Technology, which today's conference is focused around. I'm just going to invite Natalie and Sharon onto the stage to join us for today's session. So welcome, Natalie and Sharon; thank you so much for co-chairing today. We look forward to seeing you throughout the day facilitating those Q&A sections, and I'm going to hand over to Natalie and Sharon to introduce today's event and also to introduce our speakers.

Thank you very much, Kerry, and good morning, everyone. It's a real pleasure and privilege to be here this morning with Sharon, co-chairing the Winter Summit today. And it's really helpful, I think, that the focus is on AI and ethics. If we think about the ALT ethical framework, and maybe we can just share that on the slide: essentially, we started developing this during COVID in 2020, and it was really developed through a process of consultation and partnership with ALT members. So a real community effort, I think, and really, really helpful. During COVID, I think it helped us to highlight some of the digital inequalities that were emerging, and some of the issues and challenges around accessibility. Then we fast-forward from one massively disruptive event to two years later and the advent of ChatGPT, which seems to have turned the educational world upside down even more than COVID did, grabbing all the headlines. And I think that work around developing the ethical framework is probably more important than ever as we consider how we're going to apply generative AI, and AI more broadly, in education. So I'm really excited by the line-up of speakers and sessions that we've got today, to really chew over some of these thorny issues around how we adopt AI. And with that, I'm delighted to pass over to Sharon.

Thanks very much, Natalie. Do you know what? It's absolutely lovely to be here. I was looking at the chat as Kerry was talking, and it's great to see so many familiar names, people that I've known for so long. And I think, while the rise of generative artificial intelligence has certainly consumed us over the last year, the topic of today's conference is not something that stands alone.
As Natalie said, this has been emerging over some time, and we've had various conversations over the years about different aspects where we need to apply that ethical lens. We've talked about data protection; we've talked about looking after our students; we've talked about GDPR. It's not something new, necessarily; it's just applying the same lens to the situation that we've found ourselves in over the last year, really. And I suppose, reflecting back to this time last year, there was this sense of something coming, some panic. I think we've gone through various stages, haven't we, over the last 12 months: that panic, that fear, maybe coming around to acceptance, resignation, but then also something around hope and looking at opportunity. So it's been a very interesting journey, I think, over the last 12 months. We've all been looking for guidance and we've all been looking for structure, and hopefully today we're going to hear about some of the fantastic work that's been happening from some people who've been really looking at this. I'm very excited about today's agenda. So I'm going to stop speaking now, and what I'm going to do is invite Sergei Tugaranov to the stage from Kaltura. Thank you so much for the sponsorship of today's event; I have to say I'm also looking forward to seeing how the platform works today. So over to you, Sergei.

Thank you, Sharon, and welcome everyone, on behalf of Kaltura, to today's event. It's a very wintery summit; I know people are joining with wintery clothes on, and rain and grey skies. On Kaltura's side, just like everywhere else, AI is a hot topic, and when we had the opportunity to support this Winter Summit with the Association for Learning Technology, I jumped at it. These next four or five minutes from my side are just a short introduction to how we, as a technology provider to higher education, look at artificial intelligence. In some ways, we are maybe on the easier side of the conversation: we're not really in that field of how AI can be used for assessment, so we're not in that difficult part of higher education and its combination with artificial intelligence. For us, there are four key pillars that we look at when it comes to AI: video analysis and metadata enrichment, content repurposing, assistive AI for workflows, and content creation. So we're very much in a position where students benefit from AI through enrichment and content repurposing in media, rather than using AI in an exam or anything like that; we're not in that field. The first pillar is about, for example, automatically identifying chapters in a lecture and providing a title and summary for each chapter. Content repurposing, which I know certain universities are already trialling, is maybe one of my favourite parts of artificial intelligence: for example, we have an hour-long lecture, and AI should be able to boil that down to a summary of, say, three, five or ten minutes, whatever is relevant to that particular case. The third pillar is all about the way that AI can support educators in the moment. So, for example, if you're running an online or hybrid class as part of your online or hybrid programmes, how can AI be an assistant for you in that moment? How can it pick up on moments of significant drop-off in engagement, or, especially if you're managing a medium to large group of students, tell you when the chat is really taking off and people are asking a lot of questions, so that maybe now is the right time to pause the session, invite people to ask questions, and review the material?
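A toy sketch, in Python, of the kind of 'chat is taking off' signal described above. This is not Kaltura's implementation: the class, window sizes and threshold here are invented for illustration, and a real assistant would weigh many more signals than raw message rate.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class ChatEvent:
    timestamp: float  # seconds since the session started

class EngagementMonitor:
    """Flag moments when chat activity spikes well above its recent baseline."""

    def __init__(self, window_s: float = 60.0, baseline_s: float = 600.0,
                 spike_ratio: float = 3.0):
        self.window_s = window_s        # short window: "right now"
        self.baseline_s = baseline_s    # long window: the recent baseline
        self.spike_ratio = spike_ratio  # how much faster counts as a spike
        self.events: deque[float] = deque()

    def add(self, event: ChatEvent) -> None:
        self.events.append(event.timestamp)
        # Drop anything older than the baseline window.
        while self.events and event.timestamp - self.events[0] > self.baseline_s:
            self.events.popleft()

    def should_pause_for_questions(self, now: float) -> bool:
        """True if the last minute's message rate dwarfs the recent baseline."""
        recent = sum(1 for t in self.events if now - t <= self.window_s)
        if not self.events:
            return False
        recent_rate = recent / self.window_s
        baseline_rate = len(self.events) / self.baseline_s
        return recent_rate >= self.spike_ratio * baseline_rate

# Nine messages, with a burst in the final minute, trips the flag.
monitor = EngagementMonitor()
for t in [0, 5, 300, 550, 560, 565, 570, 575, 580]:
    monitor.add(ChatEvent(timestamp=t))
print(monitor.should_pause_for_questions(now=580))  # True
```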
So certainly, we're working quite hard on AI and its capabilities for assisting educators in the moment, not just asynchronously in terms of analysis and repurposing. And finally, just to wrap it up: I think we've all been excited by ChatGPT and Midjourney and DALL-E and those generative AI capabilities around text and images, but it shouldn't stop there, because in rich media formats such as video we can continue to innovate and continue to generate video content through artificial intelligence, whether that's for educators to create avatars of themselves and clone their voice, and maybe scale their teaching capabilities in that way, or for other applications that we might not even have discovered or identified yet. In any case, if any of you are curious to learn more about how a video technology company in higher education is approaching AI, of course, feel free to let me know and reach out. Otherwise, I hope that the rest of the day is interesting and engaging; it might be controversial, it might be challenging, and I'm certainly excited to be there with you throughout the day and to get to know more about how these difficult questions, especially through the lens of ethics in AI, are being tackled. So thank you again for attending. I wish everyone a lovely rest of the day; stay warm, and speak soon.

Thank you so much, Sergei. I think you've certainly set the context there and raised a couple of issues already. So yes, we're really looking forward to today's sessions. And now, I have to say, it's with great excitement that I have the honour of introducing Helen Beetham, our first keynote today, and I can see Helen joining the stage. Helen is embarrassed to realise that she's been working in educational technology for more than a quarter of a century. I don't know why she's embarrassed, because I've been working in it for about the same amount of time as well, so there's two of us in it, Helen, really. In that time, she has edited a couple of standard texts and written dozens of reports on digital education, many of them for Jisc. She has consulted for global universities and a couple of governments, and has researched issues from surveillance pedagogy to time and space in the digital curriculum. She recently spoke at a UN panel on women in technology, and her digital capabilities framework is well known in higher education and beyond. However, on the subject of generative AI, Helen finds herself, as she describes it, shouting from the margins. She will have more to say about where she is speaking from in the course of her talk, as she encourages us to approach AI ethics in a way that is both relational and positional. And so, with the provocative title 'Whose Ethics? Whose AI? A relational approach to the generative AI challenge', I welcome Helen Beetham. Over to you, Helen.

Thank you so much, Sharon. Can I check you can hear me okay? Great. Well, it's daunting and amazing to be here today.
And as I've been introduced as challenging, I will do my best to be challenging, but I also think it's really important, and ethical, to take responsibility, not just to shout from the sidelines. So hopefully I will bring my ethical observations to a conclusion with some hopeful points. I'd like to start with the wonderful and important Russell Group principles on the use of AI in education, which many of us have been working to, and I just want to appreciate that they're there; I think it's really important that we all have these guidelines. But I also think they are typical, they're not alone, in producing a certain confusion about who the ethical actors are in relation to generative AI in particular, which, as we've heard, is the excitement of the moment. And I pick out this particular phrase from them: that while ethics codes exist, 'they may not be embedded within all generative AI tools, and their incorporation or otherwise may not be something users can easily verify'. I call this Schrödinger's ethics. Some of you will be familiar with Schrödinger's cat: the idea that there is uncertainty in the universe, that a cat is put in a box with a radioactive isotope which either does or doesn't decay, killing or not killing the cat. I should say that no cats were harmed at all in the thought experiment that was Schrödinger's cat; my own cat I've just had to evict from the room before he sees any upsetting images. So here the cat is at once alive and not alive in the box, and we don't even know whether it's in there at all. And I think this is quite a problematic way to think about ethics in relation to technology. First, the idea that ethics is there or not there: it's a code, it's perhaps part of the code, if it's there. But also, if it is in the box, we can't really know what sort of cat it is. Do we like this cat? Does it meet our ethical requirements? What colour is it? And so on. The cat in the box clearly belongs to the generative AI company that owns the box, the black-box model it's embedded in. I was going to make you a bingo card to see who could match up the latest language models on the right here with the companies that own them, but it became technically too complicated. I'm sure, if you want to, you can use the chat window to match them up, and I've also put in some outliers to throw the game off a little. But the point I want to make here is that the generative AI tools are clearly being cast as ethical actors in the Russell Group principles. And I'm asking: what kind of ethical actors can these tools be? What can we expect from the companies behind the tools, which have converged very rapidly, very rapidly, on four or five big companies that might be exactly those we would expect to be making profit from a new technology? Well, the first and most prominent of these companies, and I'm sure you've been following some of their travails in the news recently, eliminated their entire AI ethics team at the same point as they deepened their partnership with the supposedly not-for-profit organisation that runs their generative AI model. The other main player famously sacked its ethics advisors, Timnit Gebru and Margaret Mitchell, when they raised issues of bias and safety in its models. The third player, xAI, was established, and I quote, 'to understand the true nature of the universe', unquote, by scraping X for content. This is the X from which the entire ethics team was also sacked when it stopped being Twitter.
So even if the idea of embedding an ethical cat were a good one, the cat in this case doesn't look in great ethical health. So how about the other people who are important in these principles: the users, us, all of us, teachers, learners, individuals? It's really important to me as an educator that the ethics we have focus on agency and on AI literacy. AI literacy is a term that's used a lot in some of the best principles I've seen, and of course, of course, I approve of this; I think capability is what we're about as educators. As the principles say, we must understand the opportunities and issues, we must apply what we've learned, we must be agents, moral and ethical agents. But to have ethical agency, you have to have certain other resources. You have to have information. You have to have opportunities to reflect, perhaps with your colleagues. You have to have time to assimilate rapid changes that may come to your practice. And you have to have real choice; you have to be empowered. And it seems to me that that choice is problematic when these tools are so rapidly being embedded into our core platforms: VLEs, grading environments, search engines, productivity tools, and all the intermediate apps and APIs that are coming along to take advantage of these new features. I find the EU guidelines, and this is not the AI Act that's just come out, this is an earlier set of guidelines for educators, helpful in that they're a bit more circumspect about that agency. They see ethical agency as: can we ask questions? Can we have a dialogue with these systems, or with the responsible public bodies? Though, of course, there's a problem there, because at the moment the only really responsible public body that's taken a stand on this is the EU itself, and maybe in the discussion we can talk about some of the limitations of the stand it's taken. Now, happily, I fell upon the Framework for Ethical Learning Technology, which I love, and I found there a bit of an advance on users as ethical agents and technologies as having a moral code, like a hidden cat. I found a much more relational approach. It's not about a fixed code of practice; it's about thinking, it's about considering, it's about having conversations with your colleagues. It's about looking beyond your immediate context as a user to think about other people, perhaps non-users, perhaps producers: to recognise broader responsibilities than simply those to me as a user in an immediate context. And this is what I think of as relational ethics. There are many different definitions; I found the one from the Centre for Technomoral Futures quite helpful, but there are plenty of others. What they focus on is putting relationships at the core: understanding technology as reframing relationships, reframing our relationships with each other, essentially, but also reframing our relationships with work and with knowledge. We need to understand the context and the ecosystem for that, and we need to ask the right questions; we need to understand how we are interwoven with these relationships, and interwoven in new ways through the technology. This is asking questions rather than ticking boxes. And the checklist of ethical harms can seem overwhelming, can't it? So I think it's quite useful to think relationally, because it actually starts to link up some of these different ethical considerations that otherwise look very separate, very unconnected, just a huge, great long list to worry about.
So: bias, privacy, and the list goes on. The other thing that's important in relational ethics is positionality, and I take this from feminist ethics but also from anti-racist and decolonising ethics: there is no ethics from nowhere. I think this is particularly important in relation to generative AI. What the synthetic models propose is always a position from nowhere, a summary from nowhere, an answer from nowhere; no one is responsible. And a relational ethics refuses that. It says there is always somebody speaking from somewhere. In relation to that, I think I should say a little bit more about my own position, and tell a little story about why I think I may be here. When I was 17, I turned down a place at Oxford to read philosophy because I wanted to go to Sussex University to read artificial intelligence and cognitive science, which, as a very geeky and strange 17-year-old, I decided was how I was going to understand how this incredible thing I walked around with, called a mind, worked, and what it was doing in the world. By 19, cognitive science, AI and I had parted company, and some of those reasons don't reflect particularly well on me as a seeker of truth; that's for the bar afterwards. But the bit that I think was important to me, and has been for the rest of my career, was that I came to an understanding that the claims being made about minds and intelligence from the AI community did not, for me, stack up. They didn't relate to my life, in which I was discovering different kinds of minds, different kinds of communities, different kinds of bodies, living as I was in the very exciting community of 1980s Brighton. It didn't make sense of my world for me. And I've not talked very much about AI in the 40 years or so since, because it hasn't seemed to me to have a very strong claim on our attention, our funding or our education. However, as we know, that's all changed, and it's come back to haunt us, and so I have been speaking my mind a little bit on this subject. I'm also really fortunate to still have connections into the world of academic AI. I spoke to three professors of AI in preparing for this talk, who were also very strange teenagers, it goes with the territory, and who, although we disagree on many things, agree with me that AI is massively overhyped, and that that's very unhelpful to everybody. But having said all of that, having said that I've made myself somewhat marginal, I am immensely privileged to be speaking from that margin. I'm white. I'm well educated, despite my falling out with university in my teens. The boundaries I currently put myself, perhaps, on the wrong side of are not going to mean my credit is stopped, or members of my family are locked up, or that I'm denied treatment, or that I can't cross a particular border, all of which are things that happen to people because of the categories that AI imposes on the world. So I think that question about positionality and power opens out into a question: whose AI is this? And I apologise that my slides, in rendering them as a PDF, seem to have crossed the images over the text, which must be really frustrating for people trying to read them; I will try to get a more readable version online as soon as I've finished speaking. So what I want to say about this is that there is no definition of artificial intelligence that doesn't also define what intelligence is.
I went to a recent article in, no less, the Journal of Artificial General Intelligence, to look at how that community is defining artificial intelligence, because it's a very slippery term. And I found this quote: every working definition of AI is an abstraction; it describes the world from a particular point of view. This abstraction is made from an engineering perspective: it's there to guide the construction of a system that will show some outcomes similar to the human mind in the aspect we've defined, while neglecting other aspects of the human mind as irrelevant. Now, obviously, there is no 'the human mind'. There are only human body-minds, in particular cultures and societies and systems of thinking, and the abstractions serve a particular purpose. So AI is, on its own terms, a project. It's not a fact. It's not a definition. It's not even an achievement. It's a project of defining some kinds of intelligence as mattering, some kinds of intelligence as being reproducible, as being valuable, and other kinds, and therefore other kinds of people, as not so important. And this goes back right to the beginning. In 1956, Simon and Newell, who were the developers of the Logic Theorist program, were among those who met at the Dartmouth conference in the US to launch this term, artificial intelligence, on the world. And they were very clear that what they were doing was trying to solve the hard problems. Marvin Minsky said intelligence means solving hard problems. They were chess players. They were mathematicians. Hard problems, to them, meant the problems that had got them absolutely to the top of their profession in the country, the US, that was at that point absolutely at the top of the world, thanks to its technology. It was also, just like the UK, a country in which intelligence testing had become really significant in schools, because for the first time in both those countries schools were non-segregated: no longer racially segregated in the south of the US, and no longer segregated in the sense that universal education had come in in the UK. So it became very important for certain people to distinguish within the school system who had this thing called intelligence, in order to create new distinctions, new divisions. Now, as it turned out, this instinct that we can start with the hard problems and the simple ones will just fall away has not been true at all. The hard problems of playing chess and proving mathematical theorems were solved relatively fast in the world of AI. Finding your way around in the world without falling over, and using natural language, the 'simple' problems that simple people can all do, turn out to be very much harder, and are still fairly glitchy. But the use of intelligence as a term to divide people up, to assign them to different categories, especially different categories of work, goes all the way back: back through intelligence testing, with its association with eugenics, all the way back to Babbage, whom people know as the father of computing (computing has many, many fathers, it turns out), who invented the difference engine, the calculating engine, the analytical engine. But in fact, in his own time, he wasn't particularly celebrated for that rather failed project; he was celebrated for his work on making the factory and the plantation system more efficient.
He was interested in the division of labour: how mechanical and mental operations could be divided up into smaller and smaller parts, which allows one to know precisely what skill is needed to do each operation. So intelligence could be taken out of the mind and body and practice of the weaver, put into a punch-card system, and given to the factory overseer, or it could be given to the plantation manager. That was what intelligence was able to do. And in exactly the same way, he could see that the work of calculating, which needed to be done for things like nautical almanacs, could be broken down into discrete operations, and each discrete operation could be done mechanically by a part of the difference engine. In fact, his plan was to put out of work a large number of piece workers, many of them women and children, who did those basic mathematical calculations. They were called computers: the people who would be put out of work were actually called computers. Right up to the present day, we can see researchers and commentators, like Meredith Whittaker, Simone Browne, Edward Ongweso Jr., whom some of you may follow online, Joy Buolamwini of the Algorithmic Justice League, Dan McQuillan, and many, many more, writing about the consequences of AI for people who fall into the wrong categories: whether it's facial recognition in policing, and this image comes from the Gender Shades project, or AI surveillance of borders and border crossings and conflict zones. So, talking of conflicts: DARPA, which is part of the US military, has been crucial to the funding of AI. I took this graph of AI funding, and you can see the rise and fall of optimism, or hype if you like to call it that, through the funding cycles. They have funded AI very intensively since the 60s, especially for developing autonomous systems, which is where driverless cars came from, and for rapid decision-making, typically for battlefield situations. Now, when I was putting these slides together and came across this rather celebratory news about a new round of DARPA funding for AI, it actually included an image that I've chosen not to include at this moment, as I'm delivering this talk, because I don't think we really want to be confronted with images of how these categories are used in conflict today. But I do think it's helpful to remember that the project of AI has always been one of projecting global power and of categorising people, particularly in ways that give them access, or not, to resources, including resources of safety and security. Now, it might seem a bit far-fetched in an education conference for me to be talking about military funding of AI. But here is, you know, I hesitate to call it a smoking gun, because obviously that's a horribly militaristic metaphor, but in my research into the AI Safety Summit a few weeks ago, which some of you may have read about on my blog, I found myself wondering why the task of unlocking the future of education had been given by the Department for Education to an organisation called Faculty AI. Some of you may remember Faculty AI from when one Dominic Cummings was running the country and was closely associated with it.
And at the end of this blog post about the insights gained from a hackathon run by the DfE, soon to be rolled out across schools across the country, was a link inviting education leaders, which I immodestly decided I was, to connect with Faculty about 'our AI strategy for our school'. I clicked, and went straight, directly, to a page where I was invited to think about how AI could help me with counterterrorism, law enforcement and the enforcement of borders. So, you know, you can take that as irrelevant, or you can take that as an interesting way in which AI has become a project that crosses very many sectors in very many interesting ways. Now, it's really important, and I have been failing to do this so far, to focus on generative AI as a very specific kind of statistical and computational tool. It's very, very different from the computational methods being used in the 1950s; in fact, the machine learning approach is almost in opposition to them. I don't want to get too far into the weeds with this, though you can if you want to. But for quite important reasons, I think 'generative', 'artificial' and 'intelligence' are all very problematic words. Obviously, I'm using them, because language is what we use and it's available. But I prefer the term synthetic media, and I take that from people like Emily Bender, who uses that term. How I would define this is: statistical modelling and re-synthesis of language, images, music, video data and all other digital records, communications and cultural means. Obviously, that is often useful: we heard just now from Kaltura how useful, for example, live transcription can be. It's really useful to re-synthesise forms of human communication for human beings to use; there's no problem with doing that. I think the problem comes when, as Naomi Klein says, the wealthiest companies in history unilaterally seize on this capacity to gather up the sum total of human knowledge and begin to release it back to us in proprietary products, and proprietary products that haven't always been entirely tested on what users want and need from them. Now, I said it was a relational approach I was going to take, and this magic box, this magic AI synthesis, in fact at root is a way of reorganising certain kinds of work. In the first place, the training data. What's called the training data essentially is the model: it's what the model is composed of, via lots of levels of processing. The training data is human work, in this case text; I'm more familiar with language models, but I'm sure others of you are familiar with how images have been used, and video is definitely on its way. Data is very much in the mix, and coding as well. So that is human work: you don't just turn a tap on and data is there. Then there is the training process itself, and that involves incredibly skilled human engineers continually adjusting what are called parameters, to close in on a version of the attention the model pays to its underlying data that will produce useful outcomes. It's more of an art than a science, and again we can get into the weeds with that if you want to, but there's no doubt that it's an extraordinarily skilful and extremely highly paid bit of human work.
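To make 'statistical modelling and re-synthesis of language' concrete, here is a deliberately tiny sketch: a word-level Markov chain that counts which word follows which in some training text, then samples from those counts to synthesise new text. Real models are incomparably larger and architecturally very different, and the corpus and function names here are invented, but the underlying move, modelling the statistics of human-written data and then re-synthesising from the model, is the one being described.

```python
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    """Build a table mapping each word to the words observed after it."""
    words = text.split()
    model: dict[str, list[str]] = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def synthesise(model: dict[str, list[str]], seed: str, length: int = 20) -> str:
    """Re-synthesise text by repeatedly sampling an observed next word."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        # random.choice over the raw list samples in proportion to frequency
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat is alive and the cat is not alive and the box is closed"
model = train(corpus)
print(synthesise(model, "the"))  # e.g. "the cat is not alive and the box is closed"
```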
And then we get to the bit that's a little more hidden, and I wonder how many of you are aware of just how much human refinement goes on in what is often called the data engine. After this initial training there can be, in the case for example of the Llama models, over a million items of human annotation data entered at this stage in the process. That's over a million times that a human being sat down, looked at some of this data and made a judgment about it, and that judgment was turned into data for training; and that's a smaller model. This is hidden behind immense secrecy. We're not supposed to see it, just like the mechanical Turk, which, ironically, is what one of the biggest data annotation organisations is called. Most of these data workers are in the global south, where pay at the moment averages, I think, between one and three dollars an hour. It's very stressful and precarious work; at the moment, workers in Nepal and Kenya are suing for the trauma they have suffered in having to identify very harmful and upsetting images. So this is the hidden black box of human labour, and it doesn't just happen once; it goes on and on. The model in the open-source development diagram this was taken from is being retrained every week by some kind of reinforcement and retraining. And then there's all of us: we're all putting our own data in, in the form of prompts. 'This is something I want to know; this is a piece of text I want you to work on.' And these prompts, unless we, or our institution on our behalf, have bought some kind of enterprise system, are also being scraped and used as data. So the point I'm making here is that this is a way of reframing different kinds of work and valuing them differently. Really, it's only the model engineers who are being highly paid for this work. The people at the top, who wrote the original data, and the people putting the prompts in, don't get paid at all; and the people in the middle, well, they're hidden away. We really don't want to think about them, because it spoils the magic, you know, the magic that is supposed to be happening. Now, I think it's really important, when we talk to our students about how they might work with and alongside AI, that we think about what this means, because it's not the case that all of our students are going to be relating to these models as developers, far from it, and it's not even the case that they're all going to relate to them as empowered users who go to them whenever they feel they need some support. Many of them are going to be working inside the data engine: increasingly, what counts as creative work is being fed into the data engine. Just in the last week, from Rest of World, which is a great place to track what's happening in this middle layer, I picked up that Silicon Valley's AI developers are now hiring poets to write poems in their native languages that can become part of the training data, so that when people say, 'Hey, write me a poem, write me a haiku', there are lots more examples, properly labelled, that can support that. Huge numbers of students are already employed here: the International Labour Organization says that far more of the people employed in this part of the data-labelling industry are under 30 than over 30, so we can expect that many of our students may well find themselves in part of this industry. And I think that's important when we think about talking to our students about working alongside AI: which bit of AI do we mean they might be working alongside?
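Purely as an illustration of what one unit of that hidden work might look like, here is a sketch of the kind of preference-label record a data-engine worker could produce when comparing two model outputs. Every field name here is hypothetical, not any company's actual schema; the point is that each such judgment is a person sitting down, making a call, and having that call turned into training data.

```python
from dataclasses import dataclass

@dataclass
class PreferenceLabel:
    prompt: str            # what the model was asked
    response_a: str        # one candidate output
    response_b: str        # another candidate output
    preferred: str         # "a" or "b": the human judgment fed back into training
    flagged_harmful: bool  # whether the worker had to review harmful content
    annotator_id: str      # the (usually invisible) worker who made the call

# One of the million-plus judgments mentioned above might look like this.
label = PreferenceLabel(
    prompt="Write me a haiku about winter.",
    response_a="Grey skies over town, / rain becoming quiet snow, / the summit stays warm.",
    response_b="Winter winter cold cold cold snow snow snow",
    preferred="a",
    flagged_harmful=False,
    annotator_id="worker-0417",  # per the talk, perhaps paid $1-3 an hour
)
```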
So let's also think about the rest of work. It's not the case that all of our students, by any means, are going to be working that closely with the models, but how do these models reframe the rest of work? And this is how our students are mainly encountering them at the moment: as a way to be more productive. Now, I'm not going to identify the screenshots I took, but if you search for 'human writing' or 'humanise generative text', thousands of hits will come back to you, aimed at students, offering to, quote, 'make your writing human', 'humanise your ChatGPT text'. What they're being offered here, and I'm not going to comment on whether this would work or not in relation to plagiarism detection, we can talk about that as well if you like, is to copy their AI text, paste it into something ironically called WriteHuman, click the button, and magically the hundred per cent 'detected as AI' will go down to zero, and they can submit that. Obviously, for students, the offer here is productivity: you can gain time, gain time back from the onerous tasks of reading and writing and expressing yourself that you've been given. But I question whether, in learning, and also in the other things we do in teaching and research, productivity really is the most important thing for us all. Who benefits from this productivity? Who benefits from it in the workplace, at whose expense, and at the expense of what other things that might be important in reading, writing, producing texts and images, producing data? Now, I'm going to go quite quickly through just a couple of the things I think are on this checklist of how relationships are being reframed, because I want to leave lots of time for discussion. It's not just that there are inequalities embedded in the model itself; there are inequalities in how we access it. I found some data on the Institute of Student Employers website; the research wasn't done by them, but it was certainly being discussed by them. It was a survey of 2,000 people, with background work into the different models, the premium and the non-premium versions. This survey had discovered that on verbal reasoning tests, classically set for graduate applicants, people who were using GPT-4 to prepare their answers, and also their CVs and their personal statements, did much better than people who were not using the paid-for version. It also discovered that the paid-for version was largely used by applicants who came from very wealthy households. And the conclusion the Institute of Student Employers came to was: this is going to set our social mobility efforts back years. So what can we do about it? Well, one of the things we can do is to make sure that we prioritise tasks in our recruitment process that can't be supported by AI: we prioritise teamwork, we prioritise live interviews, we produce tasks that might be done face-to-face. And it seems to me that if employers have come to the conclusion that using these tools is not really conducive to equity in recruitment, then obviously for our students it's not conducive to equity for us to encourage them to spend a lot of time engaging with these models; particularly when, if they get through these recruitment processes that are likely to try to exclude AI from the application, then in the workplace, if AI is being used at all, they will be retrained on whatever system, whatever model and whatever applications that workplace wants for them.
So from an equity point of view alone, I think it's really important that we're thinking about how we encourage students to sidestep, to bypass, to be in situations where these models are not available to them. There's obviously, as we all know, bias encoded in the training data these models are using. I'm more familiar with language models, but visuals are very powerful, and I like the two visuals I'm going to show you next because they don't identify anybody, and hopefully they don't upset anybody, but they're also visually really powerful. No less a company than Bloomberg, which tells you that these big companies care quite a lot about how AI is impacting on their future recruits and their own equity processes, asked Midjourney to generate thousands of images from the names of occupations, job titles. What they found was that the skin colour of the people in those generated images became darker the lower the pay of the occupation they had put in; and I think that's a really visual way of showing it without having to identify anybody. The results were similar for gender. Obviously, I know there are real problems with gender recognition in AI; I'm not commenting, and I don't know what processes Bloomberg was using to do this, but I'm simply saying that occupations could be coded very clearly by a conventional, stereotyped view of gender. So these are deeply problematic. There's also lots of new evidence about environmental impact: about generating an image rather than going to find one through an open source, which is what I've done for all the images in this presentation; I've gone to Wikimedia, or I've credited the photographers who took them. If you want to generate your own image, that takes as much energy as a full phone charge. If you want to do an inference-based search using one of the AI-generated search front ends, the jury is still out, but that takes between four and ten times as much power as if you just used a conventional search engine, for as long as we still have access to them. So there are environmental issues as well. And I think the last problem I want to raise here is not so much to do with the general concerns we might have about the knowledge ecology, which are many, and as Cory Doctorow says, the general, and I'll use his term, enshittification is a concern for all of us; but specifically, as colleges and universities, you have a concern for knowledge. What is this capacity to generate, to synthesise, going to do to our practices of learning, teaching and research? This is a really important question for us to grapple with. I'm grateful to Paul Prinsloo's recent keynote, on some similar issues to this one, for drawing to my attention the work of Luke Munn, who looks to the perspective of indigenous Māori ethics. He points out that the power to decide what the truth of the world is now resides very significantly in these systems and the companies that control them, and he asks what's going to be left for higher education when these models become, de facto, the arbiters of truth. Now, I think there's quite a lot left, and I don't think they are becoming those de facto arbiters, but I think it's a really, really important question to ask. So what do I think we can do about it? As I said, relational ethics asks questions; it doesn't provide answers. But I'm going to say two things about this, and one is about an approach that I think is really unhelpful, and that is to allow ourselves to talk about the skills humans need to work alongside these 'other intelligences'.
Why do I think that's problematic? First of all, because it concedes agency to what are basically probabilistic systems. They are tools; they're very complex tools, and they're reframing our work and our lives in complex ways, but they don't have agency. They're not co-workers. I think that talking about 'the skills humans need', as in any term that says 'the human', 'the human mind', is really creating new divisions of labour among people. It looks like it's creating a division of labour between people and technologies, but actually, as I showed in my synthesis diagram, it's reframing relationships among people, through hierarchies and categories and divisions of what intelligence is, what intelligence matters, whose intelligence matters. It has a universal approach: it says that we know what human skills are and what is needed in relation to technology. And it invests in a particular vision of the future, because, as I've tried to indicate with the changes that are happening in recruitment, defining graduate capability by what technology can't do today has no future; it's changing every day. If we define what a graduate needs to be able to do as the inverse of what generative AI claims it can do, we are on a hiding to nothing. We have to find some better ground for critique and for AI literacy. So I want to suggest that, rather than having a checklist of those skills we think humans uniquely can do, which is deeply problematic, we think about how we ask questions with and alongside our students. And please, these are just some questions that I thought of; you will think of much better ones, but these are questions that animate me. I'm also a teacher, I teach at two, this year three, different universities, so I also have students who are grappling with these issues, and these are some of the questions that, gently, as I'm talking to them about their writing practice, I might begin to ask. How did you come up with this text? How do you think these outputs that you're using and supporting are generated? Who is behind that; what's going on? What risks might there be to your creative and intellectual life, but also to other people's, in different scenarios of widespread use of this kind of tool? What kind of work do you want to do? How do you want it to be valued? Why do you want it to be valued, and why might it not be? What are the risks? And alongside a literacy of questioning, I just want to come back to right where I started, which is: how are we able to be ethical actors if we don't have a context in which to make real choices? The EU Act that was just passed, although it's going to be two years before it really comes into effect, categorises education systems as high-risk because of the impact on people's futures, and these are some of the things it says must be in place for an ethical system, an ethical ecosystem, to exist within which we can be ethical agents. It's not for us to deal with all the ethics, but the systems have to be there: systems that have risk assessment and mitigation; that have been based on high-quality data sets; that have a full record of how the training has happened, so that there's traceability and accountability throughout the process; that have appropriate human oversight; and, importantly, the capacity to end a technology, an application, if it's deemed to be used for unethical reasons or likely to be so. And of course, and this is the thing we're all grappling with in the current systems, it must be robust, secure and accurate.
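As a sketch of how an institution might make those obligations inspectable, here is a minimal per-system record reflecting the requirements just listed. This is a hypothetical structure, not an official EU AI Act schema or anyone's actual compliance tooling; the field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskAISystemRecord:
    system_name: str
    risk_assessment_done: bool      # risk assessment and mitigation in place
    data_quality_documented: bool   # based on documented, high-quality data sets
    training_traceable: bool        # full record of training: traceability, accountability
    human_oversight_owner: str      # who is accountable for appropriate human oversight
    can_be_withdrawn: bool          # capacity to end the application if misused
    robustness_tested: bool         # robust, secure and accurate
    open_issues: list[str] = field(default_factory=list)

    def deployable(self) -> bool:
        """Crude gate: every obligation met and no open issues outstanding."""
        return all([
            self.risk_assessment_done,
            self.data_quality_documented,
            self.training_traceable,
            bool(self.human_oversight_owner),
            self.can_be_withdrawn,
            self.robustness_tested,
        ]) and not self.open_issues
```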
Now, do we have systems like this at the moment in our universities and colleges? Do we have them? Can we be ethical actors without them? And if we can't, I think we really have to be putting pressure, effectively, on our universities and colleges, and on the bodies that represent educational technologists and others, to be building them, to be developing them with us: to be developing systems that allow us to have agency and care. There's a huge developing ecosystem of open development. I agree that its relationship to the foundation models is problematic, and we can talk about that, but still, this work is being done. It's possible to build, on a laptop like the one I'm looking at now, a model which is incredibly robust and safe. Unfortunately, what this diagram shows is the brain drain there has been, from the universities and research centres where the majority of AI work used to be done, into the commercial sector. But still, there is a huge amount of work being done, work that we could be leaning on; but that work needs to be open, it needs to be shared, it needs to be collaborative. As a sector overall, we have the capacity to do this. I have no doubt at all that wealthy individual universities and research centres are building their own walled gardens, and thinking about privacy and ethics and accountability and bias within those walled gardens. But that's not good enough if it leaves half the sector, or two thirds of the sector, out in the cold, using foundation models and sending their students there; that's just another source of inequity. So collectively, as a sector, we can be key ethical actors. We can hold a key ethical space for this technology to be developed and brought forward, if we do it collaboratively, if we build open knowledge projects, which we know how to do: like Wikipedia and Wikimedia, even like projects such as the semantic web from the past. But is the sector going to do that? That is the question I ask. So I want to close with a different box: not Schrödinger's cat's box, but Pandora's box, which it sometimes feels like we've opened, letting out many things that we might have hoped had stayed in. But at the bottom of Pandora's box we find hope, and I think as a sector we're full of hope, we're full of ethical commitments, and we're full of ingenuity to build better ways of relating with each other and with this new technology. And I will leave it there.

Fantastic, Helen, thank you so much for that. Yes, I can see lots of people applauding and lots of hearts. I think, certainly, I'm stunned, so thank you so much; it has been amazing, and lots of comments are appearing as well. What I'd like to do at this stage is to open it up for questions, so please do put your questions for Helen in; you can use the chat or you can use the Q&A function. I love the fact, Helen, that you ended on that note of hope, because I think, as you were talking, my heart was pumping a little bit, and I suppose I've been around long enough to know we go through these waves, don't we. I was delighted that you mentioned early on the hype around what we're calling artificial intelligence; I wonder if you could say a little bit more about that, that would be lovely. People have been so concerned and have a lot of worries about this, and it's why the conversations are so important and why we're here today. But I wonder if you could maybe say a few words on that, and we'll give a bit of space for people to ask their questions. Thank you.

I want to try to explain my take on hype. When I talked about the history of AI as a project: it's a particular project, it came out of a particular time of techno-optimism, and it's a project to define both technology and the ways we use it in particular ways.
So it's not new; it's been around for 80 years. The other thing that's not new is text support. When I discovered that this kind of tsunami was coming, it was actually from talking to digital specialists: I did a series of interviews last year with a whole range of them, and particularly the professionals supporting students who are international, or who don't have English as a first language, said: translation engines, paraphrasing, it's incredible now, you should see what they can do. So I think there's been a sliding scale, hasn't there: there's been spelling support, grammar support, translation, paraphrasing, and now we have these. In terms of their usability, I think they are certainly a step up, but they are part of a continuum; they're not like something we've never seen before. What's new is that the underlying technology is just based on brute data processing, and that's what makes it so easy for a very small number of players to capture and control it, and that's the part I worry about in terms of relationships. In terms of our relationships with students and their writing, I think the issues are very much what they always were: we have to explain to students why we want them to write or express ideas. It might not be in writing; it might be in code, it might be in presentations like this one. Why do we ask them to present their ideas? Who are we asking them to become? How do we want them to be accountable for what they think? These are the questions we've always had to ask with students. On accessibility: I totally understand the accessibility arguments, and I think at the end of the day it's about levelling the playing field rather than insisting everybody has to conform to the same exact way of expressing themselves. Then we don't need all our students to be pumping their money into these engines that promise to let them pass as something that perhaps they feel they're not, and that perhaps we should stop assessing for. Sorry, I went off into a completely different area from hype, but yes: who does the hype serve? It's kind of obvious; you don't need me to tell you who the hype serves. There are huge numbers of people rushing into this space to sell you things. And it's a brand, a kind of umbrella brand for particular kinds of computational process. That's how I relate to it, and I think how we all should: we should calm down.

Absolutely, thank you for that. We have a question in from Adam Levi; it's quite long, so I'm just going to read it out here. He says: hi, amazing talk, Helen. I'm from a creative arts institution, and we have spoken a lot internally about preparing our students for industry. A big part of this, in many areas, is the use of AI: for example, photobashing, design processes, filtering. It's almost expected by employers in these areas that AI is used; it's a key skill when graduating. This seems to conflict with your thoughts on encouraging students to complete tasks without these tools. How do you propose we strike a good balance between them?

Thank you, Adam. It's difficult to produce the nuance and complexity required in 30 minutes, but I think I would want to distinguish between general-purpose generative AI, or synthetic media, and professional and specialist tools, which I think is partly what you're talking about here. Music makers, artists, writers, and I'm a creative writer, have always used technology: to inspire, to make, to provoke, but also just as a day-to-day tool, and of course our students need to keep up with that.
I think what I would want to sound a note of caution about, and there's no fault in this, is that because of the speed with which these things have arrived, we're all experimenting with them: experimenting and finding things that we can usefully do with them, because that's all we can do, what else can we do? But we may sometimes be forgetting that we're experimenting from a position of having an established practice, whether that's a creative practice, an academic practice, a research practice. So we're integrating these tools into a practice that we have acquired in other ways. And all the evidence, even just from basic prompting, and obviously I don't know the specific tools you're talking about in all the creative industries, is that people who have a creative practice approach these tools very differently: they have far more boundaries around when they will and won't use them, and they have far more discrimination and judgment about the outputs of these tools. So I just think: absolutely, if it's a professional practice, of course that's where your students will go, and where you will go with them; that's natural. I would just say that we are very far behind in understanding how people learn when they don't already have an established practice, creative or professional or academic, in relation to these new technologies. And that is all I would say about it.

Thanks, Helen. We have maybe two or three minutes until the break, and there are a few questions that have come in in the Q&A. I'm just going to read Louise Drumm's question, and perhaps you can come back and take a look at some of the other questions after we finish up for the break. So Louise asks: what opportunities do you see for individuals to resist generative AI in education? And she says she's interested in how we can ask colleagues and students to take all of this in and still not feel powerless: how can we empower each other in making choices that go against the hype and inevitability? It's a great question.

Wow, that is a great question, and you know, I don't have the answer. I think in the work that you do, Louise, and that everybody here is doing, we're finding those answers; there's not a simple one, is there. I think I would speak for spaces of principled refusal. I think we need to get away from, and I've heard it so many times, 'only 40% of teachers have even tried it', these whole-population numbers. Every discipline is different, and that's another thing: I'm a great believer, from research I've done in critical digital literacy, that every discipline has something to learn from other disciplines, and I think that's one of the ways we support each other: we talk about how it is for us. But I think in many disciplines, and Adam's is obviously one of the exceptions, in the learning of that discipline there may need to be spaces of principled refusal. Just as I've talked about walled gardens: I think we have a moral responsibility to provide those walled gardens, I absolutely do. What are we exposing our students to if we don't do that? These technologies have been released with minimal testing, no testing on users, with known biases, with known privacy leaks, with untested guardrails that are easily broken, with an absolute agenda for capturing data. So we need the walled gardens. But I also think, and I mean, this is what the Luddites actually were doing: they weren't against technology.
They were all highly skilled technical people in their own right; they were against particular relations to technology and particular new ownerships of technology. And I think spaces of principled refusal, or principled setting aside, which is what I try to have in my writing practice, are privileges and resources that we can offer. Just as, you know, if we want our students to pass those assessments where the graduate recruiters are saying 'we don't want students who are using ChatGPT in their assessment': well, how are we preparing them for that? How are we preparing them to come across as independent thinkers, as lively and engaged, or as good team workers, if we're not giving them those spaces where this is just shut away? But in terms of supporting each other as colleagues, something like the ALT community and its ethical framework is absolutely where it's at, isn't it? Like today.

Thank you so much. And just picking up on one of the things you spoke about earlier on, around the time needed to reflect, to discuss and so on: look, I'm just going to say thank you so much, Helen, for a really inspiring keynote this morning. I'd like to ask people to show your appreciation once again; I can see a couple of clapping hands coming up. Much as we'd like to keep listening to you for the next hour, Helen, we do have to take a break before our next speaker, so I'm going to close the session on that idea of principled refusal, which I'm certainly going to take on board and use in the future. Thank you so much, Helen, and I'll say goodbye to everybody for the moment; we'll see you again in the next session very soon. Thank you.