Hi, everyone. I'm Sonia Trivedi, communications manager at Moodle, and today we are at MoodleMoot Global 2023. I have a really interesting guest with me, James Wiley, who is VP of Product and Research at ListEdTech, a market research firm that tracks systems used in education. Hi, James. How are you doing today? I'm well, I'm well. Thank you very much for inviting me. Thank you very much for joining, and I know you're joining from the US, right? Yes, I am. I flew here on Monday. Perfect. And it is your first MoodleMoot, right? My very first, and I'm already impressed, so thank you. Perfect. Today you gave a talk on artificial intelligence in education and the workplace: what now, what next? And I'm curious to know a little bit more about it. Starting with my first question: AI has been in education for 10 years, that's what I understood from your presentation. So why do you think the topic of AI exploded now? Walk us through that. Yeah, I think a couple of things. One is the anxiety around artificial intelligence globally when it comes to bias and misinformation. The EU is doing a great job; the US is not. But you have groups that, as nations, are trying to focus attention on whether we should have guardrails or some sort of ethical set of standards. That, coupled with the explosion of ChatGPT that happened at the end of last year, is bringing artificial intelligence to the minds of everyone around the world, and now we're focused on it. So it's the two things together: one, the longer global move to put ethical safeguards around artificial intelligence, and then the introduction of ChatGPT in November of last year. Right, okay. And you mentioned the current state of AI in education. Can you tell us about it? Sure.
Largely, there are five big areas where it's being used: retention (are students dropping out?), admissions (which students are the right fit?), advising (how do I support students?), teaching and learning, and then some things like proctoring that are being used right now. All of those are primarily machine learning, which is just algorithms that help us predict. That's what they're doing. They don't involve ChatGPT or anything like that right now. But I know teaching and learning is one where ChatGPT is becoming more and more prevalent, at least over the past few months, where technologists are thinking: how do I help course content development? How do I do this using generative AI or ChatGPT? Right. If you had to choose, in your view, which are the biggest use cases among those? In order, the biggest one right now is probably admissions, actually identifying the right students, because institutions around the world, particularly in the US, are struggling to meet enrollment goals. Students were away for a long time; they were away for COVID, and a lot of the revenue those students brought in was lost because they weren't there. So institutions are now ramping up their efforts around targeting the right students, targeting in a good way. I think retention would be next in terms of popularity: making sure I find the students who are at risk of dropping out. How do I do that? Those are the two big ones of the five. Interesting. Okay. And as you pointed out, ChatGPT has been a big topic now, right? How do you think technologies such as generative AI, especially ChatGPT and similar tools, will impact education and the workplace? I think it's going to have a huge impact.
But I think one caution for a lot of the technologists in this space is to make sure we're treating ChatGPT and generative AI as supplementing, not supplanting, the effort. We have to think about it as adding to the value of instruction, not replacing instructors. If you take things like writing course content off instructors' plates, they can focus on being more creative with their students: encouraging critical thinking, encouraging innovation of thought, encouraging those things, because ChatGPT can't do those. So I think its impact will be taking a lot of the more manual tasks off the plate of instructors, and likewise in the workplace for HR, human resources people, et cetera. Taking that off their plates should free them up to tackle the higher-level skills, and that's what I'm hoping will happen. But I think there's still a lot of anxiety around whether people's jobs are in jeopardy, because, as with everything else, we're worried about robotics taking our jobs, we're worried about other things taking our jobs. In this case, I don't think it's true, but I appreciate the anxiety. Right. I heard that Harvard launched the first robotic professor. That's right. Yeah. And quite a lot of work has been done in Australia. That's right. And there's a lot of terror. I mean, you know, ChatGPT is already doing very well at medical school exams and law school exams just on its own. I don't want ChatGPT to be my doctor. But it can be used: there was an interesting study just recently in the UK where it's used for mental health. Not ChatGPT on its own, but it can look at a case and help triage, preliminarily identify someone who may be suffering from depression or something like that, to save the time of the practitioners: okay, once I have this triage, now I can do a deeper dive and treat them effectively.
So I'm hoping that happens in education. Right. Yeah. That's a good point. And I remember that you also mentioned student journeys. Can you elaborate a little bit more on that: how will AI help improve student journeys? Yeah, sure. One misconception which is persistent for some reason is that the student journey is a single line from enrollment to graduation. Yeah. And that might be true for some, but it's not true for most. Right. A lot of students stop and start, they'll change majors, they might leave, they might move to another university. This becomes more prevalent if the student is at risk: first-generation students, for example, adult students, transfer students. So it's more of a highway, and as anyone who's driven on a highway knows, it's never going to be an easy route. Right. Yeah. So one of the things AI could do is help me navigate that. Think about Waze, the maps app, something like that that tells me, hey, there's a roadblock here, there's an accident here, you might want to try this, you'll save time if you do this. Because we see too many students, for example, graduating with too many credits and therefore spending too much money. We see students who are feeling the stress of being in school, worried about debt, which is another big one. If a student from the outset, when she begins her journey, can basically say, okay, I want to get there, I want to be a veterinarian, help me navigate that path, and if I change my mind, what are the pros and cons of doing that? That's the kind of vision I think AI could help us deliver on, and it would pretty much change how we support the student journey. Pretty interesting, right? It would be transformative. It would be transformative. Yeah. But also, I can't miss asking you about the fears related to that as well. Yeah. What do you think? There are fears. I mean, we hear a lot, you know, and all of us have some fears about perpetuating bias, right? Yeah.
Because ChatGPT is only going to be as good as its inputs. So if its inputs are already biased or filled with misinformation, it's only going to spit that out, right? It's not going to do anything else. So there's the risk of reinforcing that and giving it almost a stamp of approval, because a computer gave it to me, not a human. So there are some fears there. There is the fear that eventually we might run out of data and text to give it, so we'd have to feed it its own text, and that might cause a lot of problems. And the third is really about how we can have these ethical standards. How can we look at what's inside? Because a lot of the major technologists will tell you they don't know what's inside of the language model, right? It's just there. We've trained it; humans were involved in training it. But once it's up and running, I don't really know what it's doing. So it makes it really hard to evaluate one. It makes it really hard to feel comfortable if someone says, hey, I've got a language model, here it is, it's really good. You have to just trust them, because you don't know what's in it, and they can't really always tell you what's in it. So those are the three big risks. Absolutely. Do you think we are close to managing these fears, or to keeping them from materializing? I think we're not close, actually, because the creators of some of these models, including the founder of OpenAI himself, have said: we have to stop and figure out a way to evaluate these models, to protect them from bias, protect them from misinformation, and put ethical safeguards around them. But right now, on their own, he's saying, we can't do it. So we need government to help. We need policy groups to help us. We need a more concerted effort across the board for us to begin to say, we can trust this thing.
Because right now, if the trust goes, it's going to be very difficult for AI to keep moving and gaining support in universities and colleges. Absolutely, I agree with you. Lastly, I want to ask you: what do you think the future of AI for education is, basically? Are you more optimistic or pessimistic? Optimistic, but not a hundred percent optimistic; I'd say about 80%. I think some of the current worries will remain and some will die down. The ones that will remain are about bias and misinformation, and about understanding the model itself. Others, about losing jobs and so on, will die down a little bit. I'm optimistic that we might get this foundation model, as Stanford University called it: the idea of an AI engine that's taking in different sources and analyzing them, interpreting them, and informing other tasks, like understanding students, understanding pathways, et cetera. I think that's going to be a paradigm shift, instead of worrying about individual pieces of technology. I think this type of model is going to be key, and I think it's a short way away; we're close to having something like that available. So I'm optimistic about that, but within the context of the fears I mentioned before. Okay. I would say let's remain optimistic. Definitely. And thank you very much for your time. Oh, thank you. Thank you. Here at MoodleMoot Global, I wish you a really good time; enjoy your stay. Thank you very much. Thank you.