[The convener's opening welcome and introduction of the witnesses is garbled in the recording; the audible portion ends with the introduction of] Professor Judy Robertson, chair of digital learning at the University of Edinburgh. That is the first evidence session that we have had on this subject. Although the topic of AI has been raised by the committee in other sessions, when we have been discussing education reform, we were keen to hear a bit more today about this very fast-moving area. I first invite Olly Bray to make some opening remarks and then we can move to questions from members. We just thought that it would be useful, and we did have a quick chat beforehand, just to make a few opening remarks. We know that that is quite unusual for this type of evidence session, but the reason that we thought it might be useful is that we think that AI is misunderstood, and the use of AI in education is often misunderstood as well. One of the things that we come across quite often in our work is that we are all using the same words—for example, artificial intelligence—but we are all actually talking about completely different things. We are trying to get a common definition around some of these key topics that we think are quite important. I suppose that it is important for the committee to know that there are a few reasons for that. One of the reasons is that there is no universal definition of what we mean by artificial intelligence. In fact, the field is constantly being redefined; some tools that were once defined as AI have now been declassified. I will mention what Scotland's definition of AI is in a minute, just for complete clarity.
The other reason why we think that there is sometimes confusion about AI in education is the influence of science fiction, where we see humanoid-type robots walking around and we assume that they can do these types of things, and sometimes those behaviours, I suppose, are overemphasised by the media. The third reason, perhaps the most misunderstood but probably the most important, is that what is easy for us as humans is actually very, very hard for artificial intelligence, and what seems hard for us as humans can actually quite often be very, very easy for artificial intelligence. I will give you a quick example of that, if I may, convener. Some of you will remember that, back in 1997, the world chess champion at the time was beaten by IBM's Deep Blue system. More recently, in 2017, Google's DeepMind beat the world's best player at Go, the game where you move stones around a board. What is interesting is that, although both of those games are incredibly complex to play, each is really just a series of algorithms. But what is also interesting, of course, is that the technology at that time did not allow the artificial intelligence to physically pick up the chess pieces and move them around. So what is easy for us in terms of balance, movement and interpersonal skills is very, very difficult for artificial intelligence, and what is easy to compute and run through algorithms is easy for machines. That is one of the things that is most often misunderstood. Just to provide complete clarity around the definition of artificial intelligence: in Scotland, we define it as technologies used to allow computers to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition and language translation. I suppose that it is also important for the committee to know that there are several sub-fields of artificial intelligence, and some of those names are used interchangeably.
You will maybe have heard things such as deep learning, neural networks, machine learning, symbolic logic and knowledge graphs. All of those are sub-fields of artificial intelligence. We are not going to get into the details of that today, but what we would say is that, when we get into the discussion, there will be an opportunity for us to talk, I imagine, about generative AI tools that have been produced for commercial or social reasons and that are now being applied to education, but also, hopefully, for us to get into the debate about some specific tools for education that use generative AI and other forms of AI and that are built for education. Those are, again, two separate things. We will be very happy to discuss the ethics, the professional learning that is required here or, indeed, learning more generally. I hope that that is a useful introductory statement to set the scene, convener. Thank you, Olly. There are obviously a lot of terms in there that bring fear to me, because I do not understand this sphere at all, so I am hoping that, at the end of the session, everything will be a bit calmer in my head, anyway. You spoke already about generative AI there, so we are wondering how that affects how teachers and educators assess learning and understanding, particularly the outputs of unsupervised study, because that is one of the fears, I suppose, that everyone has. Does that have implications for certification practices and policy? That is, I suppose, about cheating, isn't it, to pick that one up to start with. I do not know who wants to go first on that. Does anyone? Olly? Your light was on, so that was fine. No, I am happy to go first on this, and I am sure that others will come in with opinions as well. I suppose, first of all, it begs the question: what is cheating when we use generative AI? That is, I think, an important question for us to consider.
So, if I am writing an essay and I ask generative AI, or indeed a Google search, which is based on an AI algorithm as well, a clarification question, is that cheating? If I write a paragraph and I ask ChatGPT to rewrite that paragraph for me and to check my spelling and punctuation or to rephrase it, is that cheating? Or if, with a simple prompt, I ask ChatGPT to write the whole of the essay, and I claim it as my own and hand it in, is that cheating? I suppose the point that I am trying to make, convener, is that there is a bit of a spectrum of practice around some of these things, so it is more about thinking about how we use technology tools, including tools like generative AI, to help us to improve the learning experience and make that work. Judy, do you want to come in? Yeah, so I think that starting from thinking about AI and cheating is maybe a slight distraction, because I think that generative AI in particular is going to transform education so much that we should be thinking about what it is that we need people to learn about, what it is that our educated citizens in five years and 10 years are going to need to know, and maybe the kinds of things that people need to know can no longer be assessed by the sorts of exams that we currently have. That said, I think that that spectrum is quite useful, because it shows us that most of us would say that misrepresenting work that was actually done by somebody else as your own work is cheating by most standards, but those grey areas in between, I think, will just be part of how we change what things humans do and what things machines do for us in education as well. Hello, everybody, I'm Helena. I am a daydream believer, which has nothing to do with the Monkees but everything to do with basically taking the what and the why and creating a how.
We're a not-for-profit organisation that is currently working with 35 high schools across Scotland on the creative thinking qualification, so that's the equivalent of a National 5 and a Higher, at SCQF levels 5 and 6. The qualification is credit rated by Edinburgh Napier University, and next year we anticipate that that number will double and there will be over 75 high schools working on the creative thinking qualification. The reason I'm speaking in this context around assessment is that the qualification moves us away from acquisition to application, so there's no formal exam. Our young people work through a process, and it's the process that is recognised. Very much our conversation at the minute around gen AI is around the question: what if we get it right? So I think, you know, Judy and Olly are right; it's important not to fixate on what could go wrong, but on what if we get it right. If we break that down in terms of the work that we're doing, the what-ifs are the questions, and they're daily questions as this moves at a speed that none of us anticipated. The we is a really important part of that collaborative movement, so we're working with industries, particularly the creative industries, which this is having a massive impact on, and then empowering our teachers and our young people to be part of the conversation around what feels fundamentally right in terms of this experience. I can tell you a little bit more about the qualification, but that might come out later on in our discussion. Thank you, Helena. So, Chris, in terms of that unsupervised study and the outputs that come from it, as a teacher actively working in schools right now, what are your thoughts around that? Yeah, so I've got kind of mixed feelings about that. Within the current kind of ecosystem of how school works, there are kind of risks that I'm aware of.
If we are able to kind of shift the way that school works, then AI will work really well with it. You know, looking at the Muir report and the Hayward review and the things that came out of those, AI could really enhance those things. I think the issue that we've got at the moment is that we're trying to fit in a tool that is much more than a tool. It's an expansive paradigm shift. That's what it represents, and I don't think I'm overstating it when I say that. So I would agree. You know, I've got a list of four fears, four things that I think we should be concerned about in bringing AI into our current ecosystem, but cheating is not specifically one of them. I think that that will kind of iron itself out over time, in that we will have to adjust how we assess, and I think that's helpful as well, because it does address the issue that I think Scottish education has with the examination approach. You know, I've been thinking a lot about Goodhart's law recently: when a measure becomes the target, it ceases to be a good measure. There's this idea that, you know, the pupils I teach in my Higher physics class, what do they want? They want an A so that they can go on to the next step of education, rather than, well, let's be expansive in our thinking, let's actually explore some of these topics. That's where AI can come in and really work with us in that respect, but is there the time and is there the capacity to do that within schools and the current system? That's where I'm a little less sure. That kind of takes me to my second question, which was around whether, if AI can produce work of a similar quality for the learner, that then limits their intellectual curiosity, in terms of their desire to learn skills, and what the implications of that might be. Have you considered that at all, Helena?
Again, I think that, you know, what I can bring is the context of this, so we're working with teachers who are delivering the creative thinking qualification. We have said to our teachers, we want you to use AI, gen AI, where you think it's appropriate. In terms of what we're seeing in how that works in the classroom, when a young person is researching an idea, sometimes they have used AI to come up with some directions and ways that they can perhaps create a survey to get feedback on it. But where it becomes really interesting is particularly around creative thinking. For some reason, many of us rule ourselves out as creatives. Somewhere along the line in education, someone's told you that, because you can't draw a bowl of fruit, you're not artistic, you are not creative, and so we have a whole swathe of young people who don't see that as a skill, and if the World Economic Forum is predicting that as one of the skills of the future, we all need to see ourselves as creative. What happens in the qualification is that sometimes the distance between having an idea and actually creating it and seeing it come to life can seem quite vast for people, because they feel they can't draw, they feel they can't... We're seeing our young people putting prompts in.
For example, one of the challenges is to create a wellbeing space in your school and activate it, and one of the learners had an idea about origami and getting people to come in and make something in the space. She could make one origami model, but to create and make enough to look like it filled the space would have taken a huge amount of time, so her teacher worked with her to put a prompt into Bing, one of the gen AI tools, and within seconds she had visuals that she could then take away, begin to analyse, talk about and then move back and forward with. It's this ability to use it, filter it, look at it with that context of, is this right, is this what I think I need to move on, and then go back and trust their own human intelligence and their own operating systems. I have some examples of how my university students have been using generative AI in my course, which goes to the point about what sorts of skills they're using it for: using ChatGPT as a brainstorming partner; using it to help them to work out how to tailor text for different audiences; summarising articles to help them understand things that were difficult for them at first; trying to get an idea of what the literature on a topic looks like; using generated images to illustrate their teaching materials; and sometimes computer-science-specific tasks. They're computer science students, not artists, so it's fine by me if they want to use Bing for images. I'm really pleased not only with how the students have started to use these tools to extend what they're capable of, but also with how critical they have been in working out whether and how their performance is better when they're using AI than if they were just using their normal brain, I suppose. They're aware of where their brain starts and ends and where the AI can extend it. I think that that's an example of where we're going to end up.
Obviously, that's going to look quite different for my group of learners than for Helena's or Chris's, and I think that that's going to be part of an educator's job: to work out, for this age and stage of person and the tasks that they're working on, what we are expecting AI to do with them. Before I move to Liam Kerr, is there anyone else? Just one thing to add, convener. The first time I was in this room, I was here as a geography teacher looking at technology and education, many years ago. The reason that I bring up that story is that, as a geography teacher at the time, you knew when it was the pupil's own work or not because, of course, learning is a process; it's not just an output. If you get something that's handed in, because you know pretty much all of the young people in that class and you've been with them on that journey, particularly over the period of the year, or sometimes the two years, of that course, you've got a rough idea about where all of those different young people are. Again, it's just thinking about how we assess and how we measure that process and work with young people to be able to develop those skills rather than, to Chris's point, focusing so much on the output. I do think that, alongside that, we will need to change the way that we do assessments, because even before the summer I had pupils telling me, yes, I've been writing essays for English using this: I typed the question into ChatGPT, it spat something out, I handed it in and got it marked, and it was fine. I had numbers of pupils telling me that, quite a few.
The question is, well, in our current system, that's cheating, and certainly there's not been any decent learning going on there, and perhaps that's the opportunity to bring in things like vivas, like universities use, questioning techniques to see how the pupils are getting on. Because, even thinking about it at the moment, if I've got a group of pupils and one of them decides to start using this new technology, I will notice, for various reasons. I mean, as a physics teacher, I'll probably notice because one of the dangerous aspects of the current iteration of this technology is that it can give you answers for various maths and physics questions, but it will also hallucinate: it will come up with imaginary answers and, worse than that, it will actually explain how it got to that imaginary answer. So that's worse than, you know... If I had a random answer machine and I knew it was 50-50 whether it was giving me a right answer, I could work with that, but here I've got a machine that fully explains, in a way that looks really credible, how it came to a wrong answer, which I've seen hundreds of times. A pupil just picking something randomly off the internet is easy to spot, and similarly, you know, there'll be things in different subjects where you can spot that. Issues will arise, however, when pupils enter into a new year with a new teacher. Let's say at the start of the year they build their own chatbot, which, although it sounds complicated, is a very simple task, and it will become more simple. There'll be apps for it; there'll be things coming out in the next six to 12 months. You know, I built a chatbot, and it took me about 20 minutes. I'm not a tech guru, right? I don't have an IT degree or anything like that, but I just went through a simple process where I made a chatbot that can mark first-year science investigations.
I could also have made a chatbot that represents the understanding of a 15-year-old in Scottish education, and if I make that at the start of the year and I use it consistently throughout the year, well, then it's going to be more tricky to spot that pupil. So I think there are always going to be bad actors, and there are always going to be people trying to get around the system, but it's about coming up with new assessment tools that are aware of the full picture of things and of just how much nuance there is in this new landscape. Okay, Liam Kerr, please. Thank you, convener. Good morning, panel. Chris Ransom, on exactly that point, but taking it from the other end: presumably, generative AI reflects its source material, so how do you, as educators, and Professor Robertson might wish to come in after this, but how will educators, as a general category, ensure that learners understand that that source material might be skewed and therefore treat its outputs with sufficient caution? Yes, so that was one of the... So, I've run four focus sessions within my school, Dunblane High School, and the very first one was actually on this topic of reliability and not being able to trust it, and what I call the black box of large language models. Each of these tools is built on a dataset. Some of them, a few of them, are open source, so technically people can look and see where it is getting its data from, but many of them are a bit of a black box, and I would want more technical expertise to speak about that. Even at that point, I'm currently at a stage where I don't feel comfortable recommending pupil use of generative AI in the classroom.
Certainly, I think there are actually age restrictions imposed by the companies on these tools at the moment, so I've not been saying to teachers, look, get the pupils to do this, that or the other, or to come up with new ways of doing assessment using generative AI specifically for pupil use. I've not said anything to do with that. I've kind of rambled a bit and I've forgotten your question now. Questioning the source material. Yes, so, on the source material, there are tools. I did meet Ken Muir last time I was here, and he's been working with an organisation, a company, that is looking to mitigate the dangers that come from hallucinating and from picking things out of bad source material, but, currently, and I guess I'll just leave it there, currently I don't feel comfortable, I don't trust the source material, and I'd probably pass that over to Judy here. Yes, so that's a great question, because absolutely people should not trust what generative AI comes up with, because it is inherently unreliable, and that is a feature of the technology that is used behind it: it is statistically based and it is not transparent, so you can't ever rely on it producing something that is true all of the time. That is exactly why we need to teach AI literacy in schools, so that people are very aware of the limitations of AI and why they shouldn't trust it. Now, it's possible that there will be technology advances and AI will become more reliable, but there are also human bad actors who may be deliberately using AI to generate misinformation, and I think we all know the dangers of misinformation and disinformation and why our learners need to be aware of these things and have strategies against them. So I think, in a way, that's where education is moving: towards giving learners the critical skills to always be thinking, is this information true? How can I tell?
In terms of the creative thinking qualification, the fifth learning outcome is evaluation, so in that part we work with our young people to look at the work that they've produced. We ask them to communicate their product in terms of, tell us a story, make us care, and very often the evaluation is that AI will enable you to tell a story, but it will be your human intelligence, your empathy skills, that will enable us to care. That's a really important part of the experience for young people: to look back on it and say, you know, if you put the same things in, you're going to get the same things out, but it's your human context, your human intelligence, your understanding of your own operating system that will enable you to communicate with empathy and context. I'm very grateful to you all. Olly Bray, I'll move to you now; obviously, answer that question as part of this if you wish to. Something slightly different: clearly, the ability to use and put into practice the skills that the others have talked about will be impacted by access to IT, and therefore is there not a risk of a digital divide, where certain groups will have access and practice and others won't? If I'm right about that, how do we ensure that the use of AI doesn't, perhaps inadvertently, widen education gaps? Two great questions. I'll pick the first one up first, and I'll be very, very brief, and then I'll pick up the question around devices. Judy described one of the solutions as AI literacy. You know, you could call that critical literacy, we could call it data literacy, we could call it internet literacy; we've had different paradigms of these types of literacies over time, and I think that we need to double down on that, because, as the tools get more and more sophisticated, the misinformation gets greater, and in fact the misinformation adds up, and that causes difficulties and confusion, particularly for children and young people.
There is a bit around, as the famous games designer saying goes, the solution being in the problem: if a technology is the problem, the solution might be there as well. So, a really, really interesting exercise that we try to encourage with the teachers that we work with is that you create a piece of generative AI output and then the young people analyse it and look to see where the misinformation is, using a variety of different sources. So it's not just saying that this is inaccurate or this is accurate; you actually get them to critically examine it, in the same way that we would have done in the past with a newspaper article or a web page. There is a slight twist to this, I suppose, and this may come up in questions later on, and that is that we have to be very, very wary about very, very young children who are growing up in an AI-driven world. They're speaking to smart speakers like Alexa and Google Home all of the time, and we know that they start to build up trust early around AI, so what they've not got is the wisdom to apply that critical literacy to start with. So, again, we need to think about our programmes in taking that forward, and I think that's a really important point. And then, to your question, Mr Kerr, you're absolutely right: without any technology, if you want to teach responsible use, you need the technology to do that, because getting kids to imagine the technology and telling them not to do it, or to do it, isn't an authentic learning experience.
It reminds me of back in the early days when we were running internet safety and responsible use programmes, and we used to do a lot of work with children and young people around privacy settings, but of course social media was blocked in schools, so you had teachers standing up in front of classes describing or showing screenshots of privacy settings, which is completely abstract to the children in terms of making that work. So you do need the technology to be able to get young people to understand the tools. Mr Kerr, I think that you're right about the digital divide. We see big differences in technology and the role of technology in education, and there are a variety of different reasons for that, and some of those reasons are complex, but if we want equity, we need to try to level that technology playing field, I think. Is there anyone else who wants to come in? If not, we can move on now. Good morning, everybody. First of all, I've got a couple of areas I want to look at, but I want to ask quite an open, exploring question. To what extent is this about the requirement for skills to be able to use AI effectively, and you've started to allude to some of that at different ages and stages of learning, and to what extent is it about knowledge? My own personal view is that skills and curiosity will be utterly vital, because it's only from curiosity that people will even be able to learn the skills to interrogate and question, but you're the experts, and I'd appreciate your views, Olly, maybe, first of all, and then Judy, perhaps, but I know you'll all want to come in. I agree with you pretty much. I think that this is about skills, and I think that curiosity is incredibly important in here as well, but it's also about knowledge, and we can't separate the two. My worry, when we think about AI education, is that we think about the wrong types of knowledge.
We become perhaps sometimes fixated on how the AI system works and the technical part of that. That's not to say that that's not important, but sometimes young people don't need all of the detail. What young people do need is the knowledge of ethics and the knowledge of what's right and the knowledge of what is appropriate within society. I don't think it's as simple as knowledge versus skills. I think you probably knew that anyway, so curiosity is important, but it's about making sure that we've got the right types of knowledge that young people are engaging with. Before I bring in Judy, I'm aware that Scotland's approach to AI is underpinned by ethics and makes that a key part of the framework, although one can then argue about whose ethics, which is an entirely different discussion. I absolutely agree about curiosity being very important, because I think that that's something that we should value within the human education system. I kind of agree with Olly and kind of not. Skills are important; knowledge is also important. Where I disagree is that I think that you need to know enough about how some of the AI works to be able to really understand the ethics of it. Part of the reason I say that is that I'm a computer scientist and I think that everyone should know those things, but, more seriously, we've done some research recently asking children what they understand about artificial intelligence through their interactions with Alexa. Because Alexa seems like a person, and it's designed to be like a person, to give you answers, the children tend to over-attribute intelligence to it. They're worried about trust issues and so on, but, because they don't have a technical understanding of it, some of their fears are misplaced, and yet there are trust issues that they should be worried about that they're simply not aware of.
So I think that we need to work out what is an age-appropriate level of understanding of how AI works for the different ages and stages and what's the best way to put that across. So the knowledge is important, but it needs to keep updating and, because of that, teachers' knowledge needs to keep updating. I know that Helena and Chris will want to come in as well, but, on that point about age-appropriate use of generative AI in particular, I'd appreciate your thoughts on the key role of controls. I think we touched earlier on the fact that the applications themselves have put some controls in place, but I'm interested in how we can enable youngsters at different ages and stages to develop some of that thinking and trust; I think that how trust develops in young people is critical. So I'd appreciate your thoughts on that, and then I'll bring in everyone else. Yes, so I think that this is a huge area because of child protection. The age limit for a lot of the tools is 13 years old, and that's mostly because of US legislation. With generative AI in particular, which generates images, there is a real danger there, and that's one of the reasons why I was pleased to hear what Chris said about that. I don't think that children should be using these things unsupervised. Certainly, you could do it with a teacher, but I think that the child protection risks with images are quite large there. I'm not sure what the technical solution for that is. I think that it's good for children to have practice in a guided way, for the same kind of reason that Olly was giving: we need to help and guide children through this in an age-appropriate way, in the same way as you teach children how to cross roads. We can't stop them crossing busy roads.
We need to teach them how to do it, and it's the same with these sorts of things, but there's a role for regulation there, and I think that's maybe beyond the remit of this particular committee, but it's something that the Government needs to look at. Helena and Chris, would you like to come in on those points? When we look at what the World Economic Forum is predicting in terms of future skills, 10 years ago it was saying critical thinking, problem solving and creativity, because this was on the landscape; we could see it coming. I think what happens with those skills is that they are uniquely human, and that matters in terms of us developing them and also our young people recognising them. The knowledge part assumes that a teacher is a sage on the stage, and really what we've been asked to do, particularly with lots of the stuff around project-based learning, which is particularly where AI can help and personalise learning, is to ask our teachers to move to being a coach and a mentor. Very much that's the work that we're seeing in the creative thinking qualification, where there isn't a fixed outcome. The young people are curious, so they're set a challenge, and it's their curiosity that drives their desire to find out more about the ecosystem of a forest so that they can build a theme park connected to it. When you talk to any young people, if any of us have teenagers around, and you talk to them about knowledge, they'll hold up their phone and say, I have it all here, it's all there, mum, I know it. That's a challenge. Then you have to say, how do we excite you about education? How do we give you something that makes you interested and curious to go and speak and find and come up with a solution? There is an opportunity in AI to support that. It's a fundamental shift that's going to have to happen, but there are ways of doing that.
Chris, what Helena is describing there—does that not play into the point that you made earlier about assessment and measurement and how things are going to shift radically?

I would 100 per cent agree with what's been said, and I would add that there is a danger, depending on how we approach this. It's an opportunity, but if we approach it in the wrong way, we may get a tail-wagging-the-dog situation, where we're preparing young people in schools for an AI world. Perhaps there's something to be said about rekindling the love of mind expansion, knowledge and fulfilment, and this is actually an opportunity to do that. The briefing notes and various articles that I've been reading have been focusing on the concern about jobs and fitting into a world of AI. The dystopian model is making children cogs in an AI machine, rather than AI fitting to us as humans. AI technology, because of some aspects of the way in which it has been trained—which Judy would be able to talk about—has been moulded to the way that we think, in some respects, and the way that we talk, with natural language processing and things like that. We have an opportunity to go down that route. As I said earlier, I wouldn't want to focus too much on the fears and concerns about AI, but, speaking about bad actors and things like that in our current situation: at the last meeting that I was at, one of the young people there said, "Wouldn't it be great if, while you're in the classroom, there was an AI sitting here, and you could chat to it when you don't have time to ask the teacher?" It may be that there's a space for that at some point, but because of what is currently possible with the technology—again, bouncing off what's been said before—it's not something that I would want to bring into the classroom in that respect. I did a session at the school on negative use cases, to try to prepare teachers for the worst-case scenario of what people could do with it.
I'm not trying to be alarmist when I say that, but with some basic prompting—the way that you interact with an AI tool is that you write a prompt—depending on how you word it, you can currently manipulate the AI into disregarding its safeguarding procedures. For example, with the mainstream AI tools that are available—the Microsoft and Google ones—I could give the tool a list of chemicals from the chemistry department and it could tell me how to make a weapon, and it did that. I got it to tell me inappropriate jokes—all those kinds of things that you don't want pupils engaging with. That is currently possible without any real effort. There is also the issue—not to sidetrack things here—of mobile phones in schools: in Spain, there was an issue with boys taking photos of girls in the school. There are various apps and pieces of software that will realistically remove somebody's clothing from a photo. I feel as though I'm walking a tightrope here. I want to present this as: if we adjust the way that we do education, there are massive opportunities; but if we just slot it into what we do at the moment, there are significant threats that we should be aware of.

You have quite neatly led on to my final question: how on earth do we begin to tackle the challenge? I'm very mindful of our role in here, as parliamentarians, in helping to support the education sector to keep up with the pace of change, which is startling and almost unfathomable at this point. Judy, if you could come in first, but I'd appreciate everyone's comments on this last question.

I spend quite a lot of time thinking about this. Teacher professional learning is really, really important. In particular, for this, it's not a one-shot thing; it's not like when you learned how to use Teams during the pandemic. This is on-going—it keeps changing all the time.
I think that we need forms of teacher professional learning that really respect practitioners' ability to be creative and to innovate in pedagogy, because they're going to have to keep doing that as the technology keeps changing. But teachers need support, I think, and it needs to be quite regular. One way that we've been looking at this for data literacy, which is related, is having knowledge-creating communities in which university staff and teachers across local authorities work together. They come to the university, they learn about some activities that they could do, and then they go away and try them in their classrooms and come back for professional reflection with each other. What's great about that is that I've learned so much from what the teachers have tried out and shared with each other. They also seem, particularly in primary schools, to quite naturally share it with their colleagues—not necessarily just colleagues in the school, but across clusters as well. I think that there's something really important there about the respect and the equality between the teachers, the university and whoever else is involved, and also about giving teachers enough time to engage with it. We know that time for professional learning is always a problem in education. I think that there are models for doing this, and I think that we can get there, but it's going to be quite a lot of work.

Yeah, I think that the first thing is that we can't do this alone, and we're not on our own. Every industry at the minute is trying to grapple with this in terms of its future workforce and the implications of that. I think that, more than ever, this is a call for education to be as collaborative as it possibly can be.
We're working at the minute with the NHS, and with some of the creative industries, to look at the implications of AI in the work that they're doing and to learn from the challenges and questions that are coming from all of that. In the work that we're doing with Education Scotland and its digital skills team, we're looking at creating a digital landscape—just a simple web page that would enable us to maintain contact, keep updating things and offer opportunities for employers across the different industries to share where they are at the minute and the insights that they're gathering. I think that sometimes what happens is that we go into our own little swim lane and we think that it's our unique problem. It very much isn't, as we all know.

Last comments from Ollie and Chris before we move on.

I'm happy to come in first, just on the age-appropriate point, if I may, as it's something that the committee might be interested in. There's some interesting work coming out of the US at the moment from an organisation called Common Sense, which works internationally and is looking at matching AI apps to appropriate ages and stages. It's very similar to what we do in the video games industry with the PEGI ratings. I think that's an interesting area to explore and to keep an eye on. Linked to that last question as well, teachers, I think, are always in a very tricky situation with this, because we know that there are some things in life that have age-appropriate ratings, such as a video game or a film, but there is no equivalent legislation to do with artificial intelligence. We know that children should not be exposed to these technologies, but frankly they are. Teachers are therefore in a very difficult situation in terms of how to manage that.
I think that a lot of this comes down to actually speaking to children about it, and we've been through several paradigm shifts around this—speaking to children about content that they've seen on the television, for example. We know that a lot of teachers now speak to children about content that they've played in video games. That didn't happen 10 years ago, because that paradigm hadn't caught up, but now we also need to get into the mindset of speaking to children about content that they've created through these types of tools as well. Again, these are simple things, because one thing that we know teachers in Scotland do really well is knowing their learners and speaking to children about things, but it's about making the link between these different technologies. I would absolutely support what Judy and Helena have said about the importance of teacher professional development; it's about making time for that. Folks, I'm just going to be really frank: we have to use curriculum reform as a springboard to try to get some of this important stuff into the curriculum. When we developed curriculum for excellence a number of years ago, there were things mentioned that young people need to know, which I think are probably more important now than ever before. The use of technology is one, and AI would come into that bucket. Creativity is one, and that's even more important than ever before—all the evidence points towards that. Learning for sustainability is something that has been in the curriculum, but we need to double down on it. Of course, we've got things like the qualities that have been mentioned, but we know that, again, we need to continue to get better at that, and things like the UNCRC as well.
When we think about curriculum reform and curriculum education in Scotland, we need to make sure that we've got those cross-cutting themes in there, with appropriate professional learning for teachers to make sure that they remain upskilled in those really important issues for society going forward, so that they can support children and young people.

When I approached the leadership team at Dunblane High School, to their credit, they gave me free rein in how to approach this problem, because it is something that is outwith a lot of people's understanding and area of expertise. The approach that I came to with the school was to try to move everyone through a four-stage process—awareness, understanding, utilisation and synthesis—that is, synthesising it into policy at the local level, obviously, within the school. Now, I've built a website, "Generative AI for Educators", that follows that four-stage approach, and I'm slowly populating it with things to help primarily my school, but whoever else might be helped by it. What I found during that process is that you can get a lot done by focusing on the earliest stages—awareness and understanding. I believe that the utilisation of this technology hasn't come to fruition yet. I think that there's still a massive amount of work to do to make it something that will be useful in schools but, in the meantime, to deal with the immediate threats and to grasp the immediate opportunities, just making teachers aware is a massive challenge and a really important one, as is changing the mindset. I've got surveys here that I did with the staff at the start of the year on how they feel about it. It actually reflects pupils as well: similar mindsets, and a mixture of caution and curiosity.
My role, as I see it, is to keep the curiosity—that's brilliant—and to shift people towards asking, "Well, how can we use this positively?" I think that the more teachers see what is possible with AI, the more attractive it will be, so that when the CPD sessions and the training come up, it's not, "Oh, I've got to learn another tool; I've got to learn Microsoft or Google or whatever." Teachers have minimal time for that, and I would guess that the appetite for learning a new piece of software or a new tool is at an all-time low since the pandemic. However, if what they see is something that is truly expansive and that can truly change the way that we do education, it will be so attractive that they will want to use it. The intention and approach that I've been taking is to make it look like that.

That's a great segue as we move to a much more positive approach to AI, with Stephanie Callaghan taking the lead. Thanks.

Thanks very much, convener, and thanks for being here today. It's been incredibly interesting so far, and I'm sure that we'll have many, many more sessions on this as time moves on. I might stay with what you were talking about there, Chris, and switch my questions around a little bit. Have you been thinking about how AI might help to reduce teachers' workload? I'm thinking along the lines of lesson planning, monitoring pupil progress, et cetera.

I'm happy to come in on this. My most recent session, on 22 November, was all about positive use cases of AI—after first going through the process, quite deliberately, of making the group of teachers aware and giving them an understanding of what's going on underneath the bonnet of the technology, and then jumping into, "Right, now that we understand it, how can we use it?" There are massive opportunities for this in various ways. There are admin opportunities—again, if you know where to be cautious, there are admin opportunities.
As I said in that session—and I don't want to go through the whole thing—I submitted some anonymised test scores and asked one of those tools to come up with some suggestions. The first thing that it's helpful to do when you do that is to ask the tool, "Do you understand what I've given you?" You can upload an image or a document and ask it, "Do you understand what you're looking at?" In this case, it gave me three incorrect statements about what it was looking at. So there's a word of caution: you can't just jump into the deep end and, again, it's a big thing about trust—you can't just trust it when you're doing these things. For example—and it depends whether people would consider this appropriate; again, it comes back to the ethics that we've talked about—it will write pupil reports. You could give it a very brief prompt with some ideas about a pupil. You could even have a table of all your pupils with comments for each pupil, and perhaps scores alongside—I've done this. You submit that anonymously, again, because I've been stressing the GDPR concerns about this, and get it to write a report based on just an Excel sheet of data. Obviously, you need to read it, and there are issues with hallucinations—its coming up with things that just don't exist—but I could go on and on about what you could do with it: lesson planning, for example. I submitted the National 5 learning outcomes for PE and the National 5 learning outcomes for geography, and I said, "Right, let's get out of our silos"—cross-curricular; again, there's a massive opportunity here for cross-curricular education. I just picked two random things, geography and PE, and said, "Let's look at this little section here," and it produced a lesson plan about learning about the world around us through the sports that people play in different parts of the world. It was fascinating for me.
Obviously, I'm not a geography teacher or a PE teacher, so maybe that's not a unique thought and someone has done that, but it's so easy to come up with new and exciting ways to do education using this tool. I'll leave it at that.

Fantastic. I will bring in the others there as well. It's just so interesting how it can spark that interest that makes you think beyond what you were going to imagine initially. I'd be really interested to hear what the others have to say on that. Is there a thing with teachers as well about getting the ball rolling initially, with lots of that then developing naturally as AI develops and becomes part of life, and it all comes together?

I think that Chris has summed it up really nicely, so I'll just pass over, thanks.

On the last point, there's no doubt about it: AI tools will naturally start to fall into life. We've mentioned smart speakers and things this morning, but I'm sure that pretty much everybody here has already typed an email this morning and, as you were typing it, maybe you pressed tab because it suggested the next words for you as you were going along. That's AI doing that, and we see it in the systems that are already there, in terms of prompts. There are some things where the technology will naturally drip-feed in. Chris's examples are really interesting around that. What we need to be careful of is that we don't enslave new technologies to old ways of working. The question would be: is it still appropriate to send a written report home to parents once a year, when we've got the technology to support real-time reporting, which of course AI can support us in doing? So again, how do we get behind the creativity of these things to make them work?
We actually do a similar thing in a programme that we've been developing, which we're going to pilot in the new year for head teachers, where we use generative AI to help people work through the process of a risk assessment for a school trip. To be absolutely clear, the AI isn't writing the risk assessment; it's coming up with different prompts about the venue to get people thinking about it, and then generating the letter that would go with it as well. We just use that as an example. Again, it's not about replacing things, but it is part of it. The other thing that I would say—and I said this recently on a podcast as well—is that I do worry that, with some of the AI tools, we could end up in a situation where we are given a document, we ask the technology to summarise it, we pass that summary on to somebody else, and that person doesn't read the full document but analyses the summary and, within that, we lose the point of it; we lose the nuance of it. I think that we could probably all relate to that a little bit, because there are lots of documents that come out at the moment that are sometimes summarised for us. At the moment, it's probably a person rather than the technology doing the summary but, of course, if we're trying to move things forward, it's important that we get into the nuance and actually understand the background. There is huge potential not just to reduce teacher workload but to change the way that we work, and I think that's the important thing—but we've got to be imaginative around it. Again, as I said, we're not the only ones who are looking at this; you can imagine that, within the NHS, being able to reduce the amount of admin is incredibly attractive.
I think that there's a real opportunity for us to use AI to encourage our teachers to go back to what many of them came into the profession to do, which was about the joy of learning, not about ticking boxes. So yes, I think that it should, and hopefully will, help with those admin tasks. Coming back to the opportunity around personalised learning, a huge amount of data exists within schools around learners. There's an opportunity to use data-driven insights to create personalised learning experiences, whether that is just within a classroom or more generally in looking at the ethos and aims of the overall school and the context of the learners that it has. Finally, to come back to your point about teachers learning about this: we're all learners at the minute, and with that comes a huge vulnerability. We submitted in the appendix a short film about a CPD event that we ran with Glasgow School of Art. We brought together teachers from across the country and placed them alongside experts, if that's the word, in the creative industries and AI. I sat beside a design and technology teacher who said, "First of all, I recognise what it feels like to be a learner for the first time in a very long time, and I'm absolutely terrified." There's a danger—and it could have happened to her—that we become immobilised; we know what that feels like when we don't know the answers, and we can decide to remove ourselves. However, she took up that call to action and, that day, with the support of a mentor, she learned techniques in order to produce a campaign—to stand up and tell a story. When we all move into that real ethos of being learners, we can move from being immobilised to actually taking up the call to action to make this work and get it right.

Thank you—that's really interesting. I'll come to you in just a little second, but my next question is about how AI can support individualised learning.
I'm really interested in that, and in the fact that it's almost like children and teachers learning together and learning from each other, and actually valuing that, with teachers perhaps becoming more like facilitators and having that oversight, which is about keeping things respectful, keeping the ethics on board and keeping things on track.

I was actually just going to pick up on the personalised learning point from Helena there. So far, we've been talking mostly about generative AI—general-purpose tools that anyone can use. There are also lots of commercial AI systems that are purpose-built, for example as tutoring systems in the education space. Their claim and their promise is personalised learning: every learner can learn at their own pace; the system will adapt to how they're getting on; and it will adapt if they need extra hints or more difficult or easier questions. That all sounds as though it would be great, but there are quite a lot of—I don't know whether I want to say dangers—issues that we need to consider there, and one of them is privacy. In this country, we respect children's rights, particularly the right to privacy. I think that there's a problem with the idea of collecting data on children's learning, which could be shared with teachers or parents in the form of reports, for example. We need to make sure that we get children's consent and participation in that, and in what sort of data is shared. There's some promise in personalised learning, but there are also issues about how the learners feel about it. I think that the learners should definitely have autonomy to work with the machine and to work with the teacher, maybe with teachers in the role of facilitators, as you mentioned. However, it's the humans in the room that should be using the system; the learning shouldn't be driven by the machine, and that's one of the things that I'm concerned about.
To come back to it, AI is a tool, and our human intelligence should always override it and should be celebrated and developed. When you look at personalised learning and go back to the Hayward review and the work that has been done around project-based learning, I think that that is the real opportunity here for personalised learning experiences to be opened up in the classroom. Certainly, that has been the evidence and the experience that I've seen in the work that we do with project-based learning and the 35 schools that we're working with. Our young people choose the topics that relate to the challenge, and it's that curiosity that drives them to then acquire a qualification and get accredited and assessed in that format.

I agree wholeheartedly about the project-based learning but, again, it comes back to the issue, from a practical point of view within a school, of how difficult it is to instigate project-based learning while needing to get through the curriculum to get to an exam and to do this, that and the other. Again, it's about a bigger picture.

Pam Duncan-Glancy, do you have some questions around this theme?

I do. Thank you, convener. Good morning, and thank you—that's been fascinating. I'm quite enthusiastic about the role that AI could play, but I also understand the risks, and we've touched on some of them. I know that colleagues may wish to drill in a bit more, but I'm keen to know your thinking on how we can make sure that AI supports equalities and addresses inequality rather than exacerbating it.

I know that Judy will pick up on that as well. Again, there's a lot in that question. Part of this is around access to technology, and I suppose that I'll maybe pick up on the last question as well.
One thing that we didn't touch on there is, of course, the real power of AI-driven tools that are doing a power of work in some schools to support children with additional support needs—the technologies that can now read screens to young people, provide feedback on work, and support with spelling, grammar and punctuation, and all of those basic and important skills. All of those things, I think, are important. The other part of that, at the other end of the spectrum—and we've touched on this already—is the bias that exists within AI at the moment because, of course, if we take something like ChatGPT, it has really fed off the internet from before 2022. We've already touched on the misinformation that will be in there but, actually, if we think about the internet and how human beings are represented within it: most of the articles tend to be in English; a lot of the articles are generated in Europe or North America; and it tends to be a very male-driven environment. Of course, there is then a bias in the responses that come back. It's a big and complex area, and I think that we really need a strategic plan covering the whole spectrum of these things, from tools that support personalised learning to making young people aware of the ethics and the other aspects of support that we've already discussed.

That point about the bias—I haven't heard it put like that before. I have heard about the kind of closing bubble of information around people, but that's a really interesting angle on it. Professor Robertson, did you want to come in?

To extend what Ollie said, I'm going back to the idea of commercial tutoring systems that use AI.
There has been research about systems that are deliberately designed to present information in a different way, or to guide students' learning in a different way, depending on particular characteristics, including their socioeconomic status, because it is known that, statistically and at large scale, particular groups might not perform so well in maths. That means using personal information about each student to say, "Well, maybe we should tailor the information to you in a particular way." I think that that is something that we really need to take a view on. My concern is that it might be possible for, say, local authorities to make IT procurement decisions about such tutoring systems because they seem good and maybe they help with learning, but such tools aren't evaluated, and if they systematically have features that are designed in that way, they might actually be doing something that we don't want them to do. I think that we need a lot of scrutiny of such systems.

I can give you some context from one of the schools that we are working with at the minute and some of the learners in it. Very often, you will have in a class students who, for whatever reason, will look at a new project and immediately go to "I can't": I can't write, I can't think, I can't. Where we have seen some really interesting work, particularly with those young people, is when you support them to think not in terms of a big chunk of work but to start with a prompt. With AI, the ability to generate the correct prompts is going to be a key skill for our young people, and that is about being succinct and accurate in the questions that you are asking, and asking them from an informed point of view.
What we're seeing is that, when our young people who traditionally might have struggled with English in its usual format narrow things down to a prompt, work with the teacher to put it in, and see evidence of something moving them from that immobilised stance of "I can't" to "possibly I can", that is where I think there is power and equity in the learning experience.

I don't have a lot to add. It is a massive tool for differentiation: explain such and such like I'm five; explain it like I'm in high school, primary school or university. It's very good at taking complex topics and breaking them down. It's still a bit of a back-and-forth, typing-based chatbot at the moment but, when the technology advances, that's the kind of thing that I would hope would be more integrated into a new way of doing education. Maybe at this point it would be appropriate to mention a survey that I did of school pupils on their actual interaction with AI. Of those who have used it, the vast majority—in the region of 70 per cent—said that their interaction is primarily through Snapchat, an app on the phone. A lot of what we're talking about here, we're perhaps coming at in one way but, from a pupil's perspective, most of their interaction is chatting to what's called "My AI"—an AI friend on a phone—rather than ChatGPT, which came significantly lower in the ranking.

Is there room for one more? The point about how young people trust these systems—I think that you started with this point, Ollie—because they use Alexa or use these things all the time shows the kind of leap that I think needs to be taken if they were to use AI in the ways that we've heard about.
On the work on inequalities and how we mitigate some of what you described earlier, Ollie, is there anything specifically going on just now within Education Scotland or elsewhere that you're aware of?

There's nothing going on specifically around AI and inequalities in Education Scotland at the moment that I'm aware of. There is obviously wider work going on around equalities that you'll be aware of, and that does have a technology lens to it, but at the moment we're not doing any specific work on AI and equality.

As a follow-up question, should we be doing some work on that?

The answer is yes.

Thank you. Does anyone else have any understanding of work that's going on elsewhere in the system that could support development in this area?

I think that modern studies teachers are in a good position, as are other teachers who work with children in other areas of equalities, because if you learn to spot biases, which humans have, you can also spot them in the output of AI. It's one of those things where working across the curriculum will be useful.

Can I now move to Ben Macpherson?

Thank you, convener. Good morning, all, and thank you for your time and your fascinating insights so far. I think that your points about the necessity of judging AI on its data source are so important. I heard recently that, even if the technology is perfect, AI will never be perfect, because it is reliant on the data within it. The critical thinking to apply to AI and the data set that it relies on is going to be so important. You have also rightly cautioned with warnings, and there is some reporting that I have seen that, in the United States of America—in Silicon Valley, for example—there are schools that don't even use computers or tablets, because they want people to learn those wider creative skills with a pen and paper. However, it is also true that this technology is here and is going to be a big part of the future, and you've made those points.
I want to ask some questions around utilisation. Mr Ranson spoke wisely about how there needs to be a sense that this is something to be used, and that we mustn't think about learning about AI as training young people to fit into an AI economy of the future—but AI is going to be a big part of the economy of the future. How do we get that balance right? What skills are required to use generative AI, and when should we bring them into the curriculum? Perhaps Professor Robertson would like to go first, because you talked about how, as a computer scientist, you think that everyone should learn those skills. When should they learn them, and what should they learn?

I don't really know the answer to that, but I can speculate a bit. At the moment, in the technologies curriculum, we have strands about understanding the world through computational thinking, and those skills can start in the early years and go all the way up. It's about adapting understanding of, I suppose, the difference between knowing how computers work and how those large statistical models work; I think that there's a bit more maths that you need to learn there. That's some of the knowledge that children can learn, and we can knit it into what's already taught in maths, in the computing curriculum and so on. Then there is a set of skills of learning how to use AI appropriately and wisely, and that is already, to a certain extent, taught in digital literacy—using technology in a responsible way. It all comes back to the core things that we want to be teaching anyway. Literacy, in the sense of understanding and being able to evaluate information and sources, is always going to be important. I think that the character values that we have here in Scotland, about being respectful of each other, are really important, and that's something that teachers are great at teaching already.
The case of misuse in Spain, which you mentioned, is one where technology was used to find another way to bully people, but probably the way to get to the root of that is through talking to people about responsibility and respect. I think that we're doing quite a lot of what we need to do already, and we just need to find ways to adapt it to the AI context. Obviously, with the OECD, there's been a lot of work done around this as we all grapple for answers, but I just wanted to read one of the paragraphs in terms of the skills. What it said was that the skills that are needed are for humans to be able to collaborate with machines in a way that both amplifies their own intelligence and celebrates their humanity. I felt that that was a really interesting way of summing it up. A lot of work has been done with Skills Development Scotland around meta-skills, and we know the importance of that. Our curriculum needs to take a fundamental shift so that that is not making a guest appearance in the curriculum but is actually centre stage in how our curriculum is developed. It comes back to that thing about recognising and celebrating what we're born with: our own human operating system. AI is a tool that will enable us to amplify what makes us uniquely human, but it should not be the thing that leads the way. Why did the World Economic Forum predict that for our future workforce? It saw it coming, and those are the skills that need to be in our curriculum right now. Creativity across the curriculum; but in terms of ICT skills, I remember that, when I was at school, you learned the basic operation of both Mac and Windows systems. Do we need to learn ChatGPT in S1, for example? I don't mean to pin you down, but where are we bringing that in, and when? Before Mr Bray comes in, did you want to add anything? 
What we're saying is that, first of all, going back to teenagers, as soon as you say to any teenager that you cannot do something or should not do something, it becomes incredibly attractive. That is already happening, whether we want to believe it or not: our young people are using AI in that way. What we have to look at is where it sits, so it goes back to that question: what if we get it right? Where is the right place for this to sit in that creative experience? That's where we're asking our young people: does it help you to tell a story better? Does it help you to iterate and innovate better? It goes back again to those uniquely human parts; instead of having a fixed guideline that would be redundant in the next 48 hours, we have to go back to what we do that makes us human, and ask where AI fits in and supports us to develop those skills. I think that we need to change the narrative a little bit, and I don't mean that in a tokenistic way. At the moment, we talk a lot about AI and education, whereas we should be talking more about education in the era of AI, which is a term that I've shamelessly stolen from the MIT Media Lab, which, in answer to your second question, has, since 2019, developed what I consider to be a brilliant K-12 AI education curriculum. It starts from very, very early primary with hands-on experiences, so that young people develop creativity skills but are also interacting with different types of technology, and the young people learn about the different types of AI that exist in the products and services that they would typically come across: for example, YouTube search algorithms, or interacting with smart speakers, as we've already talked about. I understand, although I've not checked it recently, that the Media Lab has now updated that curriculum, as the young people get older, to include things like generative AI and prompt crafting. 
I think that all of these things can fit in at certain stages. One of the dangers that we need to be really, really careful of is that we sometimes implement something and then assume that it's done, whereas, actually, as these technologies become more and more prevalent across all of society, we need to keep updating the curriculum all of the time, because young people move on in terms of their experiences and their thoughts, so a curriculum quickly becomes outdated, particularly when it comes to the basics. Presumably you'd update it, and make sure that there's continued professional development for staff in this space. Absolutely. Mr Ranson, did you want to add anything? I was chatting to Judy earlier about this kind of stuff. One of my biggest concerns is the potential of creating what I would describe as a cognitive crutch early on in someone's education. If we give total free access to this machine that can do a lot of the thinking for you, are young people still going to learn the basic processes that everyone needs to learn? I think that that's maybe what they're doing in America, taking things out of the classroom, so I can see where that's coming from. Equally, this is well outside my area of knowledge in terms of psychology, but I think that we should again be thinking about how humans learn and how AI can help that process, not flip it on its head. I have a friend who works for a major AI provider, and I've bounced a lot of those ideas off this person. One of the things that he said to me was that AI has proven once and for all that rote memorisation is pointless. That's the kind of potentially dangerous thinking that can come in from such a powerful technology when we work from the technology back to the human, rather than asking how humans learn (how am I going to teach my toddler to say the alphabet, or whatever) without assuming that AI could do those basic elements better. 
Certainly, Bill Gates thinks that AI can teach the alphabet better than teachers can. Potentially, again, there are ways that it can be used, but I'm just sounding a word of caution that, again, it should be AI fitting to us. I think that that would go with what Ollie was saying. I like that phrase, education in the era of AI. Just taking that on, education in the era of AI, what are the implications not just in schools but for colleges and universities? Is there enough cross-thinking going on, in your view, between employers, Government policy in terms of the AI strategy, and education in Scotland, in terms of your connections with the other Government departments? Are we collaborating enough on that in terms of considering the next stages for our young people? If I may put it this way, and read into it what you will: the connections are there, but the collaborations could be greater. Helena, do you want to comment on that? You were nodding away in agreement with Ollie before I moved on. I go back to the statement that AI will not take your job, but a person using AI will. It's a statement, and I'm not sure that I necessarily agree with it. What's important at the minute is also to recognise the job insecurity that AI brings, not just for teachers but for anyone working in education. I think that there's a real opportunity for AI to enable and elevate our educators, because it's going to become more and more important for our teachers to be there, to have that role as educators, to empower our young people to be curious and to be innovators. I think that the insecurity that AI brings for all of us in the room, when we look at what it can do, has to be recognised. It's across industries, and I think that never before has there been a better opportunity for us all to say that we're on a level playing field here; let's come together and look at what answers we can come up with. That will have an impact on the future workforce. 
So every student who is in S1 at the moment, by the time they leave school, whatever age that is and whatever job they go into, will need creative thinking. But also AI will be in almost every work setting, if not every work setting. So are we moving quickly enough to equip our young people with the understanding of AI, or do we need to really step the pace up here? You need to step the pace up; we're behind where we should be. Bill Kidd, over to yourself now. Thank you very much, convener. This has been, as has been said, really interesting: the different angles and directions that things have already taken and will go on to take. But forget about all that. This is a parliament, and we are politicians, and when it comes to that, we have to think about policy making for AI. Curriculum reform has been mentioned before, which gives us a wee opportunity to stick our oar in there, I think, while we've still got the right to do so before AI overtakes us. How good already is the guidance for educators and researchers on how to use AI ethically and effectively? You've covered, very broadly, how it exists already in our society, and in particular from the educational perspective, but is there enough guidance from government on how educators and researchers should use AI? And is it right that government should do that, or should it develop outside of that? I think that, in terms of research at universities, the ethics of that will be covered by institutional ethics committees. I'm not sure that that's so much the case for industry. As far as I'm aware, there aren't guidelines for how to ethically use AI within education, and I would welcome some kind of government guidelines on that at the very least. I think that government needs to take some kind of position on this quite quickly. 
I think that there are definitely partners in the room who could say something about that, so the universities would be keen to offer advice on that. But I think that there's a vacuum at the moment, which is slightly worrying, because things are moving so fast, as Mr Macpherson said. So you do believe that there is something there, but not very much, and it does need to be improved and expanded on then? I'm not aware of anything about ethics and education in this space; I don't think so, no. Helena, sorry, I saw you nodding away there. I was saying to the team that we were lucky enough to be invited to Helsinki last year, and we went to one of the schools to look in particular at how project-based learning is embedded in the curriculum and how they make that work. It was fascinating, and at one point in the staff room, someone said, well, this is our ethics teacher; every school has an ethics teacher. I thought that that was interesting. I do think that, for our young teachers, our new teachers coming through, that absolutely needs to be part of their training, their experience and their curriculum. Every teacher needs to understand that and have an awareness of it, to be able to bring it into the classroom and support our young people with those ethical questions, which are going to become more and more relevant. That's very interesting. I agree with both points, but I suppose there is Scotland's artificial intelligence strategy, and its subtitle is trustworthy, ethical and inclusive. That was published in March 2001, and it's a more generic policy. Now, a lot's changed since 2001. Is there anything specific for education at the moment? Well, there's a lot of stuff out there for education, but it's not driven from the centre, so it's quite sporadic advice, much of it very, very good. For me, there's a feeling that we need to draw some of this together. 
I agree with Judy's point that it would be sensible to do this sooner rather than later to try to make that work. When I started thinking about this before the summer, I thought, right, by the time I get to Christmas, the Government will have released quite clear guidance on this, and I'll be adapting everything that I've done to fit in with that. I'm not trying to be flippant about that, but it's not clear that there's anything really there for someone like me who's looking for stuff to grapple with. If it were down to me, I would be making this mandatory on a school inset day or something like that: there's some mandatory information that you need to know about this technology that exists right now, and things are happening, even just so that teachers are aware. You'll still find loads of teachers who just have no idea. On the back of that, then, these questions are around curriculum reform, I suppose, and how AI impacts on that. For those who are aware and those who are working with AI, that's one thing, and even then, as Chris says, there's still a lot of elements that you've got to work your way through. But some people won't have any awareness or depth of knowledge, and they're supposed to be leading the young people through it, so that's something that really needs to be addressed and fed through, isn't it? I suppose, just to make a point on this, and I know that everybody here will be aware of it, but it's important to say it: within the Hayward review, there is a recommendation around just what you're describing, which says, establish a cross-sector commission on education and artificial intelligence; as a matter of urgency, the Scottish Government should convene and lead a cross-sector commission to develop a shared value position on the future of AI in education, and a set of guiding principles for the use of AI. Of course, we need principles before any legislation, so I think that that's an important thing. 
Again, frankly, if I may, my worry at the moment around Hayward is that a lot of the discussion is around the Scottish diploma of achievement, which is important, and there should be discussion around that, and a lot of the discussion around AI is around assessment, cheating and exams, but there's a very, very clear recommendation in here that I think we should be taking seriously. That's really helpful. Thank you very much; thank you all. Just for clarification, Mr Bray: you stated that the strategy was from 2001; it was 2021. Sorry, 2021, sorry. Quite a big difference in 20 years, and even then, in the last two years, a lot's happened in the world of artificial intelligence. So can I move to questions from Willie Rennie, please? It's really just following up on what you've just been talking about with the Hayward review, because there's a move in there away from exams and more towards other forms of assessment; it's not clearly defined exactly how far that will go, but there are many concerns in the education community that, with the rise of AI, that would somehow be a retrograde step, and that we should be sticking with exams that are conducted under sanitised conditions, which are, as I see it, insulated from the technology. Do you have views on whether that's right or not? Who would like to start? Judy? Yeah, so I disagree with that view; I think that going back would be the retrograde step. I think that, with the assessment reform, we're moving in the right direction of travel, to be able to assess and value the sorts of skills that Helena has been telling us about. I think that moving more to invigilated exams is almost like a panicked step, and it's not what we want to do, educationally speaking. It might be convenient, but it's not going to take us where we need to be. So, there's no simple answer to this, because all of this is really complex, but I 100% agree with what Judy is saying there. 
I suppose it's just to help us understand that young people could be completing what might be considered a traditional exam on a computer, but they can be doing that with no assistance from generative AI or the internet or anything like that. That exists already, so the notion of going back to completely handwritten exams is not, I don't think, palatable in 2023 and beyond. I think that we have to be a lot more forward thinking about this, and I would also encourage, in terms of the approaches to assessment, which is reinforced in Hayward, that there are ways in which technology can help us to develop those approaches. For example, I think that there's a place for multiple-choice questions; we see that already in some of our national qualifications. There's a place for computer-aided assessment around some of those things in terms of on-going assessment, which would also reduce workload. There's a place for completely reimagining what assessment might be, using some of the technology tools that we've been talking about this morning. Again, I think that it would be a backward step. We have to look at what our employers are looking for at the minute, and I very much believe that it is about the application of knowledge. We had a young person recently who went for a job at one of the bigger tech companies, and the first thing that the person interviewing them said was, I don't care what you've got; tell me what you can do with what you've got. We really need to prepare our young people for those kinds of conversations. Sitting in an exam hall, regurgitating a set of facts on a certain date and at a certain time of the year, is not giving them those kinds of skills: creating situations where their voices can be heard, developing a story that enables them to stand in front of an employer and communicate effectively, talking about being able to deal with failure, being resilient and collaborating. 
Those are the skills of the future workforce. Yesterday, in the education secretary's statement, there was a move, in terms of maths, towards a greater emphasis on knowledge. Surely there needs to be a foundation of knowledge that is assessed in an independent fashion? I accept completely your point about skills, but surely knowledge shouldn't be undervalued in all of that? I don't think that it has been undervalued; I think that we are just looking for a different way to apply that knowledge. I would go back to what I said earlier about our examination system and the metric. Examinations are there to measure how well someone has attained whatever we are trying to achieve in education, but instead they have become the target. Even aside from AI, although AI is forcing us to look at it, I think that that's an issue with the system in its current form. Is there an issue with sitting in a hall to show some kind of knowledge, to be examined in that format? I don't personally see an issue with that, if it is used appropriately. I would just pick up on the question about knowledge. There's no doubt that knowledge is important, and, of course, it's impossible to separate knowledge from skills; it's an argument that we go round in circles on. The real question is what knowledge is important to young people, and why is it important to them? And, of course, what is the knowledge that is being taught in schools? I know from previous experience as a head teacher that sometimes the knowledge that young people are learning is not appropriate and is a poor use of cognitive load. I remember many times going into a science classroom and seeing children and young people labelling science apparatus. The argument for that is that young people need to know the names of the science apparatus. 
I think that everybody here would know what a Bunsen burner was if I put one on the table, but you didn't learn that from drawing it and labelling the parts; you learned it from using that piece of equipment. It's about thinking about these things in a sensible way. One of the challenges that we quite often have in education and in the examination system is that we take a simplistic view of knowledge, because it's easier for children and young people to understand. To give you another example, I mentioned earlier that I was a geography teacher. I've taught hundreds of children how a waterfall is formed, but that's not actually how a waterfall is formed; it's far more complex than that, because quite often in education we take a simplified version of that knowledge to make things simple. That's the bit that we need to address in this next stage of education reform. The second question that I've got is really about assessment. During the pandemic, there was a big debate about teacher judgment and the role of that. There's also a debate about SNSAs and national testing producing league tables, and whether that creates the right dynamics. We've already had the discussion about exams and how AI affects that, too. Is there a way that we can use artificial intelligence to assess, in an independent fashion, to give us confidence and to introduce accountability into the education system without creating all the negative effects of SNSAs? That's a very difficult question, but it's an interesting one. You would want a human in the loop; you definitely wouldn't want to trust an AI to do that by itself. However, if you could be sure that the AI wouldn't be subject to human biases, I think that it could be part of decision making. Designing that system would require a lot of thought, but I think that there are some possibilities there. No, I agree. 
I can't see, technically, why it wouldn't be possible to do that. There are wider questions around SNSAs and how we use them at the moment. Again, everybody here will be aware of some of the recommendations from the OECD about going back to sample-based assessment rather than individual-based assessment. I think that that's an argument that we've not completely bottomed out. Of course, sample-based assessment using technology becomes far more possible, because it's smaller in scale. Anybody else? I think that it depends on what you're assessing. If you look at it in a simple way, if you're assessing whether somebody has put in the right number of facts, or how many times they've mentioned a word, then there are possibilities. For me, it would be a step backward. It comes back to why I think that the teacher's role is going to become even more important. Your ability as a teacher to assess that work, put context to it and really know that individual and the story behind them will become more and more important. Yes, there is a role for it, but I don't think that it's the way forward. I take your point that the role of the teacher is incredibly important, but a dynamic is created where you're competing through the SNSA system; the Government doesn't like to say that league tables are produced, but league tables are produced on the back of this. The kind of relationship that you want to have with the pupil is corrupted, so is there not a way that we can make sure that policy makers, the Government and education leaders have the confidence that things are working without creating all those negative dynamics? I can only share the context of the work that we're doing with schools at the minute on the creative thinking qualification. That's a qualification that carries UCAS tariff points; it's at level 6. We've been able to create an assessment model that does assess creative thinking. I brought it along today. 
We have a simple stamp that has the learning outcomes on it, which the teachers use for formative assessment, and we've created an app that enables them to assess creative thinking in that way. That is important, because you don't want your child at 17 coming home and saying, I think that I'm a red; they want to know that they're an A, a B, a C or a D, and that that has validity in terms of their future steps into university. We've been able to do that, make it robust, make it straightforward and then enable our teachers to get on with what is really important: the learning and teaching experience. Ross, do you have anything that you might want to put to the panel this morning? To put you on the spot a bit there. There are a couple of questions that I'm interested in, particularly around how the ethical questions that we talked about right at the start marry up with what Willie was saying on exams and assessments, and how we measure that in school. Chris, you gave the example of this: it's one thing to be able to tell whether a pupil has used something like ChatGPT to help them with an essay where there is a right or a wrong answer; they've either got the name of the historical figure right or wrong, or got the date right or wrong. If it's something much more subjective, it can be harder for staff to drill down and tell, even if you know the pupil well, depending on the AI system that is being used. So I'd be interested in how we can produce advice on that, where we're getting into territory in which it becomes incredibly subjective to distinguish between what the pupil has produced and what the AI might have produced, because there is no factual right or wrong answer for you to be able to check, on the hallucination point that you were mentioning. I think that subject specialists would be the people I would go to for that, in that it will look different for each department in a school or university or whatever. 
However, the long and short of it must be that we need to change our assessment methods: if you're trying to make some sort of formal assessment of how well a pupil has learned something, there are now certain assessment methods that you cannot rely on. Just knowing that forces you to consider other things, like I said, such as a viva, or that, when they hand in their work, the pupil should expect to be asked questions about it, all sorts of things. I'm sure that Judy's got more to say on that. I think that you said it before, actually. It's the questioning and the idea of maybe having something like a viva, an oral discussion with the pupils: why did you choose to make that argument? Did you think about this? Or, for me, when the students are writing code: why did you choose that design rather than the other design? What were you thinking at this bit here? They're comprehension questions, and they help to assess what the person understands rather than just what's on the page. It's time intensive to do that, but I think that it's a really valid way of assessing learning. Just on that point of time, I'm interested in your thoughts on this, Chris, in particular as a teacher, drawing on your experience in the classroom. Realistically, in the system, there's never going to be the capacity for the teacher to do that one on one with every pupil. There is probably the capacity to do it in group settings, where the pupils are able essentially to cross-examine each other, observed by a teacher who can then pick up whether that raises red flags that a pupil doesn't currently have a comprehension of what they've presented. Is there a way to develop that in a group-setting environment that addresses those workload issues? We can all envisage a system in which there is limitless capacity and therefore staff can address all those issues directly, but that's not the system that we have or that we're ever realistically going to have. 
Is there a role for cross-examination by the pupils and students themselves there? I think so. I know that there are people working on tools to help with basic assessment as you go through the year, when you don't have access to a teacher: how can you tell how you're getting on? I don't feel that I've got much else to say on that, but it sounds like a good point. I think that there is; I think that there's a lot of scope for that. I don't want to speak for Judy, but I suspect that that's a set of skills that is quite important when young people get to university, that bit about peer assessment and having those discussions. So the more that we can think about pedagogies and pedagogical toolkits in schools, and how we use those pedagogies to develop skills around peer assessment while young people are growing emotionally and developmentally, is, I think, really, really important. That's a good question, actually; I think that it's an issue that gives quite good insight into how we could do this. For formative assessment, where you're trying to assess what the child knows so that you can help them with the next step, that kind of group inquiry is really useful. There's something called process-oriented guided inquiry learning, which is almost exactly what you described: the learners in the group have prompts that they can ask each other, which helps to guide their thinking, and the teacher is observing and listening in, and knows where to take the class next. We probably need more formal things for the higher-stakes assessments at the end of the year, but we don't have to do so much of that. I think that, quite simply, we need to recognise and reward the process and move away from our focus on an end product. That's great. Thank you very much. That's super. 
I'd like to thank the panel for their time and evidence this morning. A number of you have said that you may have things that you want to share with us, so if there's anything tangible that you have, you can leave that behind. Before we move into our private session, I want to note on the record that I have received apologies from our deputy convener this morning. That concludes the public part of our proceedings, and I will now suspend the meeting to allow the witnesses to leave. Thank you very much.