Right. Well, since it's the top of the hour here, I'm going to start things off by welcoming everybody. Welcome to the Future Trends Forum. I'm delighted to see you here today. We've got a fantastic guest talking about a subject that's of great interest to all of us, and I'm really looking forward to our conversation. We've been talking about artificial intelligence on the Future Trends Forum for quite some time — for years, in fact. And since last November, we've had a whole series of sessions based on the massive advances of large language models like ChatGPT. We've covered a wide range of topics in that: pedagogy, campus policy, ways of support, challenges, limitations of AI. And today we're returning to the topic of ethics. That is, how can we use these tools in the ways that are best for everybody involved? How can people in higher education best use artificial intelligence?

Now, I'm absolutely delighted to welcome Donald Clark. He has one great advantage, which is that he's a Scotsman. But beyond that, he's also a lifelong educational technology worker, professor, CEO, founder, and a very, very thoughtful and deep commentator. We've got a couple of links at the bottom left of the screen. One is a link to his blog, the Donald Clark blog, and the other is a link to his most recent book, Artificial Intelligence for Learning. I recommend both of those very, very highly. Now, without any further ado, let me welcome Donald Clark on stage. Good afternoon, sir — or good evening.

Yeah, good evening to you, Bryan. Great to see you.

Oh, it's great to see you as well. And we found you in Brighton this evening.

Yes, despite my accent, from the north of this good land, I am actually as far away as you can get in the United Kingdom, in Brighton, which is right on the south coast below London. I've been here for a long time, though. My kids are English, and they have a different accent from me, if you can imagine that.
Oh, that's terrifying. But I know you're not pulling up stakes — you've put down stakes in Brighton, and not as a political statement. I know you travel a great deal, which is wonderful. I love your photography. Speaking of travel, let me ask you: we have this tradition on the forum of asking people to introduce themselves by what they're working on next. I'm curious, what's ahead for you for the next year?

Well, it's interesting. I'm completely and utterly focused on those two letters, AI, in education and learning. And there are two aspects to that. One is practical implementation. I'm involved in several large projects, one with a global publisher, and others. So I'm just doing stuff, which I like, because it's important that things get done. There's a lot of talk; people have to actually do things. That's the first thing. The second thing is to continue with some of the evangelizing. This year I've been lucky enough to be in Africa — Senegal and other places — and I'll be in South Africa in October. Getting out there and actually learning about other cultures' approaches to AI has really been an eye-opener for me. One of the reasons I enjoy travel is that it's a learning experience for anybody who's out there discussing any of these topics in another cultural context. So I spend a lot of my time in other countries, even other continents.

Well, first of all, that sounds wonderful — both the travel, but also doing all the work on AI. And I know work for you means hands-on work with the technology as well as thinking about it and planning how to use it in education. I'm curious, what are some of the international differences you've seen in the reception of AI?

Yeah, well, there are two big fault lines, if you want to call them that, in the world. One is East versus West.
So for those who have never been to China or Japan or Singapore or the Far East: what you might call that Confucian influence, the collective view of the world, is so radically different from the Western individualistic view of the world. And I think this often results in quite a brutal appraisal of Chinese culture, for example, from the West, without really fully understanding the subtleties. I've experienced that all my life — I've travelled a fair bit in the Far East. The second fault line, which is even bigger and saddens me more in a sense, is the difference between the global North and the global South. I've been going there for many years, and the disparity in wealth is almost beyond belief. I'm 66 years old, and one of the things that has saddened me in my life is that inequalities are rising, not falling, in the world — in our own countries, in the US and Europe as well. And, well, you've heard me talk about this before, I think higher education has played a role in this: creating a graduate class that, quite suddenly in my lifetime — really the last, let's say, five or six years — has started to look down on the other half of the population. I think that act of disparagement is a really dangerous thing, because you're not going to win people over by disparaging them. You see it go off on social media. So you have the Trump phenomenon in the US, Brexit here, or France with Paris burning. And that's because one half has taken it into their heads that the other half don't matter much. That has saddened me a little bit. To bring it back to AI, though, I think we have a chance with this technology. I've been involved in technology for decades, and this technology is almost unique in that it absolutely exploded and was taken up globally within days. You had 100 million people using it within two months, all over the world.
When I was in Singapore — I was doing a keynote there — I asked, and every single person had used it. That's not true when I'm in New York. So I think we tend to assume other people are less ethical. We're talking about ethics tonight, and I think the ethical context is very different around the world. We tend to have that very individualistic "privacy, data, I'm not sharing with anybody" view of the world. They don't care so much about that; they think, actually, the collective aim is the essence of the technology, and that's what makes it great. The fact that you're allowing the whole world to share in this experience — because these large language models have taken this mass of human culture. To read this stuff would take you 22,000 years. That's GPT-3.5: 22,000 years, eight hours a day, to read the text upon which it was trained. This is the whole of human culture. It's a unique moment in our species. And yet the reaction is always about me: my name, my data. So maybe one of my themes tonight is that there's a lot of moralizing, which is very different from ethics.

I see, I see. That's a good point. Well, there's been a lot of moral — I'll try to be neutral here — a lot of moral conversation about this. There are arguments that some of the large language models are acting immorally or amorally in the content of their training sets: some argue that the content can be offensive or can reproduce pre-existing biases, like the North–South bias you mentioned before. Others argue that there is a problem with privacy — and again, I've heard this more from the US and Europe than elsewhere. But there are other moral challenges, too.
People have complained that these are for-profit companies that are basically raiding the commons for privatized goods. But I know that you're very bullish on it; you think we should be using it more because it's extraordinary. How do you square these? What's the best ethical way forward?

Yeah, well, there are a number of skittles you've set up for me there at the end of the alley. Let's take the last one, while my memory is still recalling them. OpenAI, which launched this thing, is a not-for-profit — a not-for-profit in which the chief executive has no equity stake at all. There's a capped for-profit arm hanging off it, because these programmers are rare beasts, they're geniuses, and they have to be rewarded. So I think this is unique in the history of capitalism: one of the greatest pieces of technology has actually been owned, launched, and executed by a not-for-profit. And you have to take your hat off to the US here. Vannevar Bush had that notion — which you'll recognise if you've seen the film Oppenheimer — of bringing government, the higher education system, and private enterprise together. All of these entities really have that feel about them, which is why the US leads here. On the first one, about the data set: well, it was going to happen. The internet happened to arise in the Anglo-Saxon world, so English is the largest data set. On the other hand, is there any piece of technology you know that's available in so many languages — that would translate into Wolof in Senegal, where I was recently — and is now heading towards 150 languages, going up to a thousand?

Wow.

I think it's folding those other cultures and languages into the omelette, as it were, very, very quickly indeed. How does a typical higher education institution compare in terms of cultural adaptation and the languages it teaches in?

Yeah, that's a good question.
That's a good question. So with some of these issues, when you widen the perspective out, you find that a lot of this is a very particular perspective on an ethical or moral issue, and people ride in on their moral high horse. But when you widen the perspective, you find there are other views. You asked a very good question there about whether it produces good or bad stuff. Well, of course — it captures language, and language has good and bad stuff in it. It would be ridiculous to expect dialogue with the history of our culture to exclude bad things. You couldn't even have an ethical discussion if that were true. It would be impossible, because people mistake ethics for moralising; ethics is really the study of the good and the bad. So you can't, in higher education, just limit the conversation to, let's say, plagiarism — go down the black hole of plagiarism, sniffing about in that cat-and-mouse game — and then ignore AI's benefits in terms of teaching and learning. But that's exactly what's happened. That's not an ethical discussion about AI in higher education; that's a narrow, teacher-orientated perspective, because no one is thinking about the massive benefits for students here, especially. We had a really interesting paper published at the beginning of August which showed that academics are actually second-guessing: if they even suspect that students are using ChatGPT, they're marking them down. And I hate that phrase, "academic integrity". I think it's a conceit. Where was the integrity when 20,000 people were working in Kenya in essay mills, producing essays for your students? Nobody cared about integrity much then. We know it's going on; we've known for decades. So I think sometimes we get wrapped up in the minutiae and footnotes when the bigger picture has ethical benefits that are fantastic. You know, I don't drive a car.
This comes as an astounding thing to most Americans when they discover I've never driven a car. But often at conferences I say: I've got a piece of technology in my back pocket here that's going to give human beings an astounding level of freedom, unknown in the history of our species. But one and a half million people will die horrible mangled deaths every year, with another one to two million suffering horrible, life-changing injuries. Would you accept that piece of technology? Every conference audience goes, "No way." And of course, it's the automobile. So I think we always take a sort of utilitarian view of technology, and we come to an accommodation with it, because it always comes with good and bad. If the benefits outweigh the downsides, we live with it, as we have with the automobile. There's another constant phenomenon here — because I'm old enough to remember a few of these, like your good self, Bryan. It always amuses me that the first piece of negative reaction to technology was to the sundial, by a guy called Plautus, in the third century BC. He thought the sundial would destroy civilization as we knew it because it cut things up into hours. And then another one somebody pointed out to me — and it's true, because I remember this — when the photocopier came along, people in education went nuts. They thought it was the end of copyright. And of course now, well, we don't photocopy any more because we have digital copies. In the Soviet Union they actually made the photocopier illegal; the word didn't even show up in the Soviet dictionary. So you get the same reaction every time. I wrote a book called Learning Technology, where I outline this: one of the essential features of technological change is that you get a negativity-bias, confirmation-bias reaction to it.
And it's been extreme with AI, partly because people don't fully understand it; it therefore induces a little more fear than, say, the internet did. If you've lived as long as me, you've seen it with everything. I saw it with smartphones, computer games — you name it. We reacted that way to Wikipedia, and people did want to ban that stuff. We have short memories, and of course now we couldn't live without it. And then we have these ridiculous demands, like "all AI should be transparent." Well, you'd better forget about Google, Google Scholar, and Google Search then — you'd better cut them out of your institution, because they're not transparent; they're proprietary algorithms owned by Google. People's demands on the regulatory side and the ethical side are so extreme sometimes that you wonder whether they're thinking at all about the human benefits of the technology. That's me being bullish.

Well, that's a terrific, terrific statement. And I appreciate the positivity of this, and also your historical breadth. Friends, if you're new to the forum, what I've just done is set the stage by asking our dear guest a couple of quick questions. But now I'd like to turn the floor over to all of you. So again, if you're new to the forum, look at the very bottom of the screen, where that white strip is running, and either click the raised hand if you want to join us on stage — you don't have to have the beard to join us, though we encourage that — or click the question mark to type in a Q&A. We've covered a lot of ground already, we have more to cover, and we have a stack of questions that have already come in. So let me start off with a question from our good friend Brent Anders, who's coming to us from even later in the day, in Armenia, and who asks: do you find a big difference between different countries and cultures as far as how they think about AI in education? So not just AI in general, but AI in education.

Good question.
Yes. So I had the recent experience in Singapore, and it rather surprised me, because they were already doing it. I went in thinking I was evangelising the AI-in-education thing, and people in the audience were giving me great examples where they had already set it up within their institutions. So I think there are two great focal points for this now: China and the Far East, and America. Europe's nowhere in this. Europe's lost the plot. I think Europe's got it into its head that it's going to regulate the whole world, but it's only five point seven percent of the global population. I think they think they have a bigger say in things than they do. When I was in the Far East and in Africa, nobody cared about the EU; it seemed almost irrelevant to them, such a small thing. But the second part of your question was about AI literacy, and I'd like to make a little comment on that, because educators always want — it's like digital literacy — the solution to everything is a course. We must teach people about AI literacy and so on. Actually, these kids, your students, probably know more about it than you. They've been using it for longer than you. They're more relaxed about it than you. That default term, AI literacy, sometimes worries me, because it's always a sign of "we have the knowledge and we'll open your mouth and pour it into you." And that's not true with AI, because actually I find a very poor understanding of what large language models are and do in higher education, for example — a lot of rejection, head-in-the-sand behaviour — compared to, let's say, the corporate world, or some cultures of the East, as I say. In Africa, another really interesting reaction was: bring it on — we're poor, this is free, it's amazing. That's a pretty good combination.
Are we really going to impose our higher education model on Africa, where hardly anybody goes to college? Is that what we're really going to do? There's a really interesting example in reaction to this question. When I was in Senegal, I was involved in a debate with a good friend of mine, Michael, who was a minister in the Kenyan government. And he was furious — furious — at "white saviours", as he called them, coming in and complaining because these poor black kids in Kenya were being paid £1.60 to $2 an hour by OpenAI. He said: do you realise that's 40% higher than the average basic hourly wage? And how dare an academic on 90 to 100,000 pounds a year deny these black kids the opportunity of getting the first rung on an IT ladder by doing that type of work? He thought it was slightly ridiculous, and I wholly agree with him. I think that's an example of what you might call moralising, and not thinking the ethical issue through properly.

That's a good answer. I've heard that complaint as well.

Yeah, you get it all the time.

Well — first of all, Brent, thank you for the question, and we're going to circle back to you, because I do recommend everyone pay attention to Brent and learn from him and his work in AI literacies. We have a question from Carly Brady; we've just popped it on the stage. Carly says: for administrators in higher ed, how should we address ethics in AI with our faculty and students? Should ethics always be part of the messaging?

I think it's OK to have ethics as part of the messaging. But in so many examples I've seen so far in higher education, it's the whole of the messaging, which strikes me as slightly ridiculous, really, because the ethical threats are minor. Like any technology, it has an ethical dimension, but I wouldn't lead with it. Sometimes you see the whole document is called "an ethical approach to AI" or whatever. And I think the students laugh at this slightly.
You know, they think: it's a chatbot — what are you worrying about? People like Ashok Goel at Georgia Tech have been using these things as teaching assistants since 2015. So what's the fuss? He's a great guy, and in terms of teaching and learning he's been using it with very large numbers of students. I think we should be leading with the teaching and learning stuff. But of course, what gets all the attention is the plagiarism-and-essays thing. And that's where I think HE is going badly wrong, because lecturing is easy, teaching is hard; essay-setting is easy, assessment is hard. So we've taken the easy default — most academics are researchers — and we've defaulted to this fossilized, very simplistic view that teaching is lecturing and assessment is essays. That's always been hopeless. Now it's been challenged. And I would hope that we do an upgrade to HE 2.0 here and get a wee bit more serious about teaching and assessment, because the rest of the world is miles ahead on this stuff. Nobody in workplace learning would ever ask somebody to write an essay. Nobody in the real world writes essays. What's saddened me over the years is this notion that you send your kid off to school at age four or five, they pop out 20 years later — and oh, by the way, you've got to do another year, write another essay, do a master's degree, that's another twenty thousand dollars. And that's about 20 years of really just reading text and writing text. That's about it. It really is quite astonishing when you think about how focused on text the education system is. We're drowning in a sea of text now, and yet it's so free and so easily available. So I think maybe this is a challenge to the whole of the education system.
But in answer to your question: do mention ethics, have ethics in there, but put it at the end, like you would with any technology. Did we launch the internet with ethics? Would you launch Wikipedia now leading with the ethics of Wikipedia? No — it's just a given. We use it as a tool. That's what will happen with AI: we'll use it just as a tool. Good question.

Thank you, Carly. We have more questions piling up, which is great — which is what I love. And we have a couple that are about modern AI in general, not just its educational uses. This one is from Corey — hang on a second while I press the correct button. Corey Katz asks: why should we expect the benefits to outweigh the costs when it comes to LLMs? Sorry — LLM, large language model.

Yeah. Well, by costs — you have to break down the cost side of the equation here. There is the cost of training these models, which is actually where most of the cost comes from, because those are eye-watering compute costs. Then you have the storage and distribution costs and so on. But that's way, way cheaper and better than the type of costs and energy use you're concerned about, for example, Bryan. Let me give you some examples of where people don't think about cost. In Europe we have a 28 billion euro budget on a thing called Erasmus, which is flying kids around Europe to different educational institutions — flying rich kids around Europe, and academics too. Do academics stop going to academic conferences? Of course they don't. I think we should be looking at those things rather than the base costs of creating the foundational models. Not all the cost is in creating the foundational large language model; beyond that you have the transactional token costs, but those are plummeting.
And in direct answer to your question, I think the benefits hugely outweigh the costs, because you already have hundreds of millions of people using this to massively increase productivity in their own lives, their own workplaces, and their own studies. That benefit is such a huge acceleration in terms of productivity that it's definitely worth it.

So what is the big increase in productivity? Is it simply reducing the time-to-task on doing things like writing and images?

Yeah, that's a very good question, Bryan. There's a very good paper from MIT on this. I wish I had time to share it with you, but they did an actual test — I think the sample was about four hundred and forty-four people, to be exact. They split them into two groups, one group with ChatGPT, the other without. Now, what you get is this massive increase in time saved — the first point you made. But what surprised them was also an increase in quality. Now, that's interesting, because time and time again my own experience confirms that that's true. I use this thing all day, every day; we're using it to create courses, we're using it in learning environments, all sorts of things. So I think there are two big doozies, two big hits, you get. One is the time saving, which is massive. Last night I got a thirteen-thousand-seven-hundred-word document in Dutch for a big learning program. It took me five minutes to translate it perfectly into English. Now, I know how much that used to cost and how much time that used to take, because I used to do it. You're talking about doing things in minutes that used to take weeks and months. So the time saving is massive — several orders of magnitude greater than things we've had before. But what's really interesting to me is the increase in quality of the output. And I think this is true for learners as well.
Interestingly, when I'm writing stuff now, I'm constantly going to this thing because it gives me beautiful structured nuggets, which I build on. It takes a lot of that spadework out — the "let me find the answer, what is that statistic?" It gives me a format like a spine, which I can build a body around. And that's how students use this. So those are the two big things.

Yeah. Well, that's a great answer. In fact, I just did this in my class this week, where they were developing a learning program. We ran it through ChatGPT, and they were astonished to see how much stuff was there, and then they could respond to it and go back.

We have another question that follows up on this — one dimension of it, to do with one resource. Arthur wonders: how does the water and power consumption needed to keep these services up affect the potential benefits to humankind? The electrical footprint is pretty large, especially for the initial training, and water is part of that.

Well, one thing that struck me — and I think the world quietly forgot, because the two things happened almost simultaneously — is this. At the end of November, on the 30th, ChatGPT, on GPT-3.5, was launched. Five days later, on December the 4th — I know because my birthday is the 5th — the first net gain in energy came out of the California fusion experiment. Now, this was a really interesting thing that got almost no press, but for me it was far more important than the AI, because climate change is the most pressing near-term threat to our species. And, for those following this, we've had a subsequent improvement since. So this is starting to accelerate towards what people are now looking at: commercial application of fusion in the 2050s, which coincides nicely with the Paris Accords around climate change and so forth.
So in a sense, if we want a chance on climate change, embrace AI. For those interested in the fusion thing: AI has been used extensively in the control of the plasma, which is a necessary condition for fusion itself. And then in other areas, like healthcare, you have these astonishing leaps, like AlphaFold. It would have taken a PhD four years to fold one protein; now you can literally do thousands, if not millions, of proteins. I think DeepMind calculated that a billion hours of research were saved by that one algorithm. These are the exponential leaps we're seeing. In answer to the question about water — which is a good one — I don't know enough to answer convincingly on that one, to be honest. But I think we're forgetting — there's a good line here which I've started to use — that AI is the worst it will ever be, at this moment. Look at the difference between 3.5 and 4, and even between 4 in March and now: you've got this exponential increase in accuracy and in the provenance of the material. When I use ChatGPT-4, I use the Wikipedia plugins and so on to make sure I've got a double check on accuracy. So it's all coming good really, super quick. And what we have to stop doing is pointing at these examples — "look what it did!" — because it's usually ChatGPT 3.5, because people are too mean to pay the $20 a month. So you get this madness on Twitter every day: oh look, it doesn't do this, it doesn't do that. And it's usually an old version. Or the famous example, which I laughed at, was the New York Times article where the writer said ChatGPT had instructed him to leave his wife.
Now, suppose I was in a bar in New York and a stranger sat next to me and started asking questions about leaving his wife, and I looked at him and went, yeah, you probably should be leaving your wife — or she should be leaving you. It was a classic — every programmer knows this — a classic garbage-in, garbage-out thing. Why did it get big billing in the New York Times, and millions of retweets? Because people want to see the negatives.

Yes. And of course, it's a failure to understand what the technology is and what it does. There's a brilliant observation by Ian Bogost, almost a year ago now, where he took a look at ChatGPT and came to the conclusion that it was something like a musical instrument — that you were playing the internet. And I just love that idea, that phrasing. And that's going to include some crazy stuff.

I think there are a lot of really interesting, subtle reflections on what this thing is. An Italian friend of mine said she felt a bit sinful when she was using it, because it was like a god-like creature, an angel or something — it was freaking her out a little bit. But to be more down to earth, I think there's some really useful work now being explored with people like Wittgenstein and language games, because these large language models are actually capturing more than just language. I hate that phrase "stochastic parrot". It's not mimicking or sampling anything — that's what parrots do. Actually, it sort of has an understanding of language games, because when it captures language, it captures those different species of language as well. That's why you can say: ChatGPT, I want you to be Bryan Alexander, and I want you to be a brilliant teacher, and start talking to me about climate change. It can take those roles and those styles on. Now, this is what Wittgenstein explored for many, many years.
And I think that's what it's starting to do. People like Gordon Pask and Vygotsky thought all our development was through the phonological loop and that notion of language — that we're funnelled through language. I'm now coming to a conclusion, which I didn't hold even a year ago, that intelligence emerges from language, as opposed to language being a feature of intelligence. In other words, we think in language, as Wittgenstein always claimed, and therefore we have natural limits set by language. And AI is the one chance we have of breaking those limits.

Let me just quickly pause and get meta for a second. First of all, thank you to everyone who has come in lately — our population has swelled a bit, so greetings, everyone. And the chat is on fire; you have fans here. Carolyn mentions Vannevar Bush and that connection. This is a great day. And again, if you'd like to join us on stage — in fact, we have more questions coming in, and I'm really hoping we get a chance to get to all of them, because this is getting really, really terrific. We have — let me bring this up — a really good question from John Hollenbeck, who is now almost delirious with the delight of summer in Wisconsin. John asks: I notice you use the word learning in your book titles, rather than teaching or educating. Is this intentional?

It is indeed, yes. Remember that most learning takes place in the absence of the teacher. It's a great error, the premise that teaching is a necessary condition for learning. I haven't had a teacher for 40-odd years, and I've learned most of what I know in those 40 years. When you're in an institution, you sometimes get carried away by the notion that all learning has to involve teaching in some way.
But it's a good question, because in all four of my books, actually, I make that distinction between teaching and learning. Now, teaching is a great skill. I'm not too sure it's practised that well, to be honest, because it tends not to keep up with the knowledge base and so forth over time. But there's a distinction between the two, and I think learning is fundamental here. In my book on learning technology, I go way back to the beginnings of language and learning, and I have a stone axe behind me here. Awesome. This is a teardrop axe, found not far from my house, actually. And I've actually made one of those. It takes about 500 hours to learn to do that. So you're talking about people half a million years ago, just along the coast here, who actually made these. Half a million years ago, that was a different species, Homo heidelbergensis. Now, they needed teaching there. And we know that they needed language as well, because you can't actually teach somebody to do this without telling them things. You can't really do it by imitation. So this has got a long history. But there's a really interesting consequence of that question. We did need human teachers. I think the pendulum has swung, as technology has improved, especially with AI and the internet: the need for the human teacher has lessened. I have no need to go to a human being for anything now, by and large, unless I was maybe getting coaching for my tennis serve or whatever. And this is what I'm working on now. I have this vision: for the first time, on the horizon, I think we have the promise of a universal teacher which is not a human being. AI already almost has a degree in every subject. Go to ChatGPT, ask it anything. It has more knowledge than any human being, at a certain level.
It actually is quite a deep level when you explore it. So it has more knowledge than anybody else. It can speak more languages than anybody else, many, many more languages than any human being on the planet. What's left in this human exceptionalism thing? Copernicus threw the Earth out of the centre, and Darwin came along saying you're just an animal. What have we got left? Now, we often say, well, teaching. No, I don't think so. Teaching is scrappy. Forty percent of students don't turn up to lectures. It's all a bit messy. I think we have a chance of building good teaching practice, proper pedagogy, into the models, and we've been doing this, I've been doing this for the last six months, so that when it produces content or starts speaking to you, it has good pedagogy built in. You see what I mean? I think that's the next stage. People like Khan Academy are doing that. Lots of people are working on this. I think it will happen quite quickly, and a learner would prefer to have that, because it will accelerate learning. It knows everything. It's a good and brilliant teacher. Then the only bit that's left is emotional reading, that human interaction. Even there, if you look at Apple's Vision Pro and the eye-tracking thing, it can already recognise emotion in a human face better than a human being. And remember, this is a hugely variable skill in people; people with extreme autism, for example, can hardly recognise any emotion at all. So I think this teacher could recognise emotion just by image recognition, but it can also display emotion. The picture you put up for Shindig here, that was after a session I had in London, where my face and body were captured and my voice, accent and everything was cloned. Wait, wait, they cloned a Scottish accent? Yeah, they... Whoa, whoa, whoa. That's breaking the speed of light there, my friend. So you can see where this is going.
And now, in the studio, the volumetric one, that could be a 3D figure. So we are now at the stage where you could speak to my avatar and it would speak to you with my voice, a sort of deepfake teacher. But it would be better than me as a teacher, because I can embody all the good pedagogy into the model. So your digital twin could be a better teacher than you could ever be, if you get this. This is one of the questions I've been asking: what happens when our inventions exceed us? How do we handle this? But in order to help handle it, let me bring in some additional firepower, fresh from Armenia, a good friend, Brent Sanders. Brent Anders, excuse me. Hello. Good to see you, sir. Good to see you. So this is quite an honor. I've been following Donald Clark here for quite a while on his blog, as well as on Twitter. So I kind of feel like I'm meeting a celebrity. I deny everything. So I've got a lot of different comments. Before I ask my question, I want to throw a couple of things back at you. First of all, very refreshing. I love hearing your thoughts on all of this. It matches a lot of my thoughts as well. I'm a director for the Center for Teaching and Learning here at my university, so I have to very much defend the humanity aspect, because I do a lot of presentations and I get instructors, faculty at all levels, that are very concerned, very worried: is my job secure, will AI take over? And what I tell them is very much that it's not a matter of tomorrow it's going to take over for you. But you should be somewhat worried, at least on some level. And that worry is the exact type of stress that all instructors should have in order to make them better, right? It should be that stress of, yeah, I should be better than this computer. I should be more motivational.
I should provide the other things that AI currently doesn't do. I should know my students well enough to bring up a relevant point that matches their major or their background. Those are things AI can't really do, at least in the classroom, at least in the format that we have now. So I think there is some great benefit in pushing on those aspects of what you're talking about. So that's great. OK, I don't want to take up too much time here, but I definitely want to ask my question. The big thing that I get, and this is circling back to what you were talking about before, is instructors get so fixated on this aspect of: I have essays and I'm worried that they're cheating, so I'm going to go through all this AI text detection and plagiarism checking. I always start off by making a big argument of: don't even bring up plagiarism. Let's just say if they used AI, that's cheating, and then we don't even have to get into plagiarism, right? But then I always push back on that. I created this thing called the SHARE technique that breaks it down and organizes what they can do with their assignments and assessments, how those can be modified, going beyond just that essay. Is the essay the right answer? Because it isn't all the time, right? Should there be something more? Maybe instead of a 2,000-word essay, now it's a 500-word essay and there's a presentation, or maybe there's a Q&A dealing with that essay that's done live, or maybe it's an art piece that they present and have to talk about. So there are all these different aspects that I try to put together in that SHARE technique. But my question to you is: what else can I say to the instructors who want to push this essay aspect? Because that's very much a fixture within higher education; it's been the tried and true thing: if you truly learned it, then you can write about it.
And the big fear that I have is I have a lot of instructors that are like, oh, I know, I'll have them just write it out in class, right? So now that's the test: the test is writing it out in class. And I have lots of problems with that. I have no problem with, OK, maybe a quick little paragraph or some small practical exercise, but a full essay? I don't think that's the answer. So I'd love to get your thoughts on that. Yeah. So my first reaction is, I've been out and about in the real world, in higher education and community colleges, they call them further education here in the UK, so I've seen a lot of different types of schools also. What has really impressed me are the people in the middle layer, in vocational colleges, I think you call them community colleges in the States. I was in Glasgow, at one of the biggest colleges in the UK, and everybody in the audience, all the teachers, was already using the tools, because a good friend of mine, Joe Wilson, had actually set up a place in his institution where all faculty could go use the tools, get familiar with them, and then come back to students fully aware of what they do and don't do, the fact that plagiarism checkers do not work. Don't use them. That's exactly right. And then there was another brilliant example: I was speaking to Peter Rochet and Devin Walton, from Northern Essex Community College and Middlesex Community College in Massachusetts, and they had put forums up for faculty so that you could just go and play around, build courses and stuff, to allow faculty to have that pre-prompting type of experience, a bit of handholding, and then come back to students. I think it's terribly important that we get faculty au fait with the limitations of this thing, the fact that the plagiarism checkers don't work, all that sort of stuff.
Then when you come back to students, you're absolutely right. I really look forward to the death of the essay. Essays have a place, but the two primary pedagogies, which I despise really, I'm tired of this, are the lecture and the essay, medieval entities. You paid to go to a lecture; two and a half thousand years later, we're doing the same thing. Forty percent of your students don't even turn up. They pay hundreds of thousands of dollars for a degree and they don't turn up for the meal. It's sort of bizarre, in a way. So I think this is finally the challenge, because I really do honestly believe, and I'm getting a bit political here, that higher education is unreformable from the inside. It's reformable by pressure from the outside, on costs. You see that in the brilliant work that Bryan's doing on attitudinal shifts in the U.S. towards higher education. Very interesting data. A massive pendulum swing towards negativity, which is worrying, really. But when you deal with the students, you have to be more sophisticated with assessment. Actually, corporate people do this quite well. When you're training a pilot, they go on a simulator. Why? Because pilots go down with the plane. Faculty don't go down with their students. Maybe we should think of it that way: if I don't teach them well, they're in trouble. They may fail and still have the debt. That's morally bad. Increasingly, in health care, you see the use of simulations, action-oriented stuff. As I said earlier, higher education is almost drowning in a sea of text and text-driven analysis. I think it's time it swung away from the essay as its primary form of both teaching and assessment, and certainly away from the lecture. And I think this will finally be the external pressure we need.
Going back to one thing you said earlier, though, about the unemployment thing, which is a real fear. Now, my father-in-law was a miner, most of his life underground, working shifts. I don't remember academics complaining that much when all the miners lost their jobs. I can't remember academia complaining at all. In fact, academia helped take all those jobs in the Midwest and export them to China. I was there. I heard those CEOs talk about downsizing and outsourcing. The business schools were responsible for this. So we shouldn't really be surprised. We always thought that routine working-class jobs would be automated by robots. That turned out not to be true. We don't even have a self-driving car. Actually, what's happened is that cognitive skills are being replaced. And suddenly, when it's us and our children, we go into a moral panic. But I can't remember the moral panic when the business schools in higher education were recommending you outsource manufacturing to China. And we wonder why we have Trump, Brexit, the burning of Paris. So I am very political on this. I think higher education needs to get its act together, stop talking only about academic integrity, and reflect on whether what we're doing is right for humanity as a whole. I often get political on this because, in my lifetime, I've been saddened by the fact that it has played a role in increasing inequalities, and that the graduate class has flipped and suddenly looks down upon people, the deplorables, the despicables. And we wonder why we have Trump. It doesn't surprise me in the slightest. Well, that's fantastic. And Brent, really glad to see you. Yeah, thanks for this question. Great to see you. If you want to post a link to your SHARE technique and to your YouTube channel, please, Brent, throw that in the chat, so people who don't know you can find you.
Friends, we have about nine minutes left, and I want to make sure that people get to put their questions to our great guest. We've had a great time, and I'm going to try and squish a few of these questions together. This one comes back to the pedagogy question in an interesting way. This is from Kenneth Olin. He asks, if we're asking the language model to do our preliminary thinking for us, how do we teach students to do that preliminary thinking? Interesting phrase, preliminary thinking. Let me focus on that. I'm drawing on my own experience here. I remember when I was doing postgraduate research, I had to go to a library, and I spent about six months of my life walking up and down library shelves, pulling out journals and so on. Preliminary thinking is actually the chore. When you're writing an academic paper and you have to do all that meta-study type stuff, it takes forever, and AI can do it in seconds. I was speaking to a publisher, a very big academic publisher; we're doing a big project with them. They are absolutely fearful of the fact that most academic papers that are just theoretical, armchair papers or meta-studies, can be done by AI. They're already receiving them. So by preliminary thinking, I think there is a huge amount of waste. I should have had six months taken off it, because you shouldn't have to, as I did, walk for miles up and down the shelves to find books. So, preliminary thinking. First, at undergraduate level, it's aided by AI. In the same way, why is it so engaging? People talk about engagement and learning. There's nothing more engaging than a million people in two days, a hundred million people in two months, suddenly using something that, when they type something in, blows their mind. And the first feeling you get is, wow, that's useful.
That's useful, because it used to take me a long time. So just getting up the first couple of rungs of the ladder, what we might call preliminary thinking, is exactly what students need help with. The blank sheet of paper with an essay title set at the top was always poor teaching, because an essay demands dozens of different skills: writing skills, structural skills, research skills. To load it all into one act is ridiculous. And of course, the research is quite clear that even the assessment of essays by faculty is all over the place with regard to biases. What name is at the top of the paper? Do they agree with my hypothesis or not? Let's be honest, is there a student alive who writes a paper in direct opposition to the book that the teacher wrote? I didn't. A very good friend of mine did, at the LSE, on Brexit, and he got distinctions on every essay and a fail on that one. And this is a really smart guy. He actually put it to the test. So I think preliminary thinking, it's a good phrase, that. Those first steps up the ladder are what we should be helping students with, or allowing them to help themselves with, and then judge them on the rest. I mean, I'm really suspicious about the phrase critical thinking. I don't even know what it is as a skill, to be honest, because I think it's very domain-specific. I don't think it's a general skill in any way. But I really do think that we should be giving students more support on those types of acts. You feel lost. I think a lot of students in their first six months feel totally and utterly abandoned and lost, which is why you have that great big dropout lump in year one, in 101 courses, the first stats course psychology students have to do, the first math course. They struggle, because they've been given essays, and it doesn't help them at all.
Unless it's an endpoint. That's the thing about the endpoint essay: learning is a process, and it's a process that needs repeated iterations and feedback. The essay leaves no room for that. It's an endpoint. It's a tombstone. It is. Yeah, it's too late by then, you know, for some people. Kenneth, what a great question. Thank you so much. I love this community. We have another take on this, which refers back to a previous session of ours. Paul asks: does open source, and the way it fosters transparency, innovation, economic activity, and democratized access, address AI's ethical problems? Yes, I do have an opinion on this, and I may be completely wrong, because my opinions have varied over time on this. At first, I was against the open source thing, because I hadn't got my head around some of the dangers around deepfakes and so on. Now I've become a bit more of a fan of it, mainly through the work of Yann LeCun. Yann LeCun is the head of AI at Facebook, and one of the three Turing Award winners, the Nobel Prize of AI. A hugely significant figure. And Facebook: who would have thought that Mark Zuckerberg would be the hero of the day? But Mark Zuckerberg has launched Llama, an open source large language model which anybody can use. We've downloaded it. We've been using it. And I think there's a great deal of wisdom at this stage in pushing for more open source activity on this front. There is lots of it. The techie people I work with in AI, by and large, when I asked them what percentage of the stuff they're using is open source, the answer came back, about 60, 70 percent. That's generally true of coding. So I think we should continue with that tradition until perhaps we hit a red-flag danger that means it could be a bit worrying.
But until then, it's a very good question, because the other word in there was absolutely spot on: this democratizes. We could easily hold on to this in the US, or California, and let the big companies rule the roost, or we could take a more global view and share it with the rest of the world, which is exactly what open source is. The internet is built on open source, and it works beautifully. Well, you've written movingly, both in your blog and on Twitter, about how you see this as a democratizing tool. Yes, it is. We have another question, which may actually be a good wrap-up question. I just love this, because it's such a practical question. Frenzy asks: if you suddenly found yourself in a higher ed classroom two weeks from now, how would you use AI with your students? Well, before the class, of course, I would say that what is coming here is the opportunity for teachers to have teaching assistants. I think it's a really big one. And that's why I love Ashok Goel at Georgia Tech. I shared a keynote with him recently in Europe, and he's been doing this for years. With his teaching assistant, not only did the students not know it was a bot, they put it up for a teaching award. He's now melded that old chatbot with ChatGPT and got a two-plus-two-equals-five effect. Why are we not doing this? Why are we treating adjuncts as slaves to do that work when we could be accelerating learning in students? So in many ways, I wouldn't be in a classroom using AI. I don't like classrooms anyway; I haven't been in a classroom for 40 years. I don't know what this obsession is with being in a room with nothing in it. No context, nothing.
You know, when I go into lecture halls now, I think, well, it just seems like something from a bygone age. I was in Greece recently, at Epidaurus, and I sat in the biggest lecture theatre in the world. It was around then, and we've copied the exact structure and seating ever since. In fact, we even have the pulpit, the lectern, because we just threw theology in there as well, teaching in sermons. That's largely what higher education does: it doesn't teach, it preaches, and they tell you what the truth is and you'd better listen. Well, that brings us back to the ethics and politics of this. Yeah, but there's another element here. In terms of students, I would really be encouraging pre-work using AI tutors and AI-created content. We're involved in a few projects in this now, where you can create a great action-orientated course; health care is an area we've been working in, real action-orientated stuff as opposed to overly theoretical stuff. I think that will be aided by AI's push into 3D as well. I've got a book coming out on the 3rd of September on learning in the shift from 2D to 3D. I think Apple has broken the mould on this. That will start to happen. All fuelled by AI: the 3D metaverse thing will happen because of AI. I think we're looking forward, over the next 10 years, to some amazing breakthroughs in simulations, in content, and in tutors and coaches that really help students, because they personalize. So I wouldn't be using AI in a classroom. I would be looking at AI outside the classroom, to aid the whole online, teaching-at-a-distance phenomenon. And I, as a teacher, would like to mop up, as it were: go at the high end, clear up any misconceptions people might have, help them with that top-end research. Yes. In the academic world, academics don't like teaching 101 courses year in, year out, or marking the same essay they set five years ago.
Why would you want to do that stuff? Automate as much as you possibly can, so that you really do teach and allow students to learn using the technology. There was an interesting question about teaching and learning earlier. I think we can start dispensing with a lot of the bad teaching and encourage a pendulum swing towards learners who learn. That's what this technology does. And remember, this technology is a learner. It constantly learns. In fact, we have this technology because we studied the human mind. I've written extensively about this. We forget this: from Hebb onwards, all sorts of geniuses worked out neural networks. They're not exactly the same as the brain, but there's a similarity; they drew inspiration from it. It's learning theory, really. Demis Hassabis of DeepMind went back into the university system and wrote two brilliant papers on learning theory around memory. These people are deeply soaked in learning theory. And therefore we have this congruence between AI and what we regard as good pedagogy, which has produced a piece of technology that potentially has godlike qualities in teaching. I mean, this will take ages, but we should be optimistic about that and see it as a positive thing. You remind me of one of my favorite modern novels by Richard Powers, Galatea 2.2, which is about an AI trained at a university. But with great regret, Donald, I have to pause this right now. We've reached the hour mark and we have to wrap things up. And I actually have to literally leave this room and rejoin my students, which is all too appropriate. You've been fantastic, my friend. It's just terrific to think with you. You're an intellectual adrenaline shot. What's the best way for people to keep up with you, on your blog and on Twitter?
Nobody calls a kid Donald any longer, for obvious reasons. So if you type Donald Clark learning into Google, you'll find my blog. Yeah, I blog a lot. And I've got loads of YouTube videos and podcasts. I did a series called Great Minds on Learning, 25 podcasts on the history of learning theory, that topic of Hebb onwards, I did one on that. Twenty-five of them. Learning theory is my thing, really; I've written extensively about it, books and so on. So if anybody's interested, go to my blog, or just email me. I'm going to put my email address in here right now for people. Thank you. I'm more than happy to speak to people; it's all about sharing things, so why not? There we are. I have a Hotmail address, old school. Oh, you probably don't get any spam, right? Yeah, that's right. I've looked after it and nurtured it. You know, in a few years' time, Bryan, you won't have to go and see your students. You can just leave your avatar behind here to answer our questions, and you, as a real person, can go see the students. That's one future. Donald Clark, thank you so much for being with us. This has been great. Have a great evening, and I'm looking forward to the future with you. Thanks very much for the questions. They were great. Really good. Excellent. Take care. Thank you. And friends, don't leave just yet. I've got to tell you where we're headed next, but I do want to echo Donald's praise: these are fantastic questions, and thank you for a great session. If you want to keep talking about this, you can hit us up on Twitter or Mastodon, or on my blog, using the hashtag FTE. If you'd like to go back to our previous sessions about pedagogy, about AI, about reform in higher education, just go to tinyurl.com slash FTF archive.
If you'd like to look ahead to our upcoming sessions, including some more on AI, just go to forum, the future of education, dot U.S. And if you'd like to read more of my thoughts on AI, just head to my Substack, aiandacademia.substack.com. Again, I have to run, friends, but I want to thank you all for a great conversation. Thank you for thinking together. I hope, as the fall semester approaches, or if it's already upon you, that you're all doing well. Take care and be safe. We'll see you next time online. Bye-bye.