Thanks, Corry. I'm going to share my screen. So, hi everybody. I'm Sue Attewell. I'm co-lead for Jisc's National Centre for AI in Tertiary Education, and today I'm going to talk a little bit about the National Centre and what we do. We'll look at where we are now, dip into assessment and student perceptions, and then move into an activity. So, the National Centre for AI in Tertiary Education, and hopefully that's the last time I'm going to say that. It's a bit of a mouthful, but it is our name. We were set up about two years ago to accelerate the adoption of AI across the tertiary education sector in a responsible way. Responsible, for us, is quite a key word. To an extent, the acceleration and awareness has been done by all the media and the rise of generative AI tools, so we've kind of flipped that to support the sector, but in a responsible way. We like to think that we are making the sector aware of both the positives and the negatives and supporting them to make informed choices. And we do this in a few ways. We do a lot of thought leadership. We run some pilots and evaluate them. We provide a lot of practical advice, guidance and training. We do some influencing, working with people like the DfE and the devolved nations. And we have a very active community. So the AI in tertiary education report is one of our thought leadership pieces, something we produce annually. The third edition was published in September 2023, and we'll revisit that and aim to publish a 2024 version in the summer. What that does is capture where AI is at that point in time: what's happening across AI, what's happening in the sector, and where we're starting to see emerging use cases and case studies of good practice. And just sharing some of our pilots: we piloted Bodyswaps, which is a tool at the VR/AI intersection. We piloted that directly with students.
We did a roadshow-type approach, and the module we chose to pilot was around employability and supporting students with interviews. So students could go through an interview in VR. They could then swap bodies into the interviewer's body and see how they came across, and get AI feedback on how to improve their performance. It also has a lot of wellbeing and medical modules within it, and we're starting to see more and more VR/AI tools come along, particularly in the medical space. Anyway, we've just finished piloting that. We've also just finished piloting quite a nice little tool that uses AI to create audio resources, podcast-type materials, from your existing content. So another form of learning for the student, with very little effort on the part of the tutor or teacher. And Teachermatic we're piloting at the moment, so mid-pilot: we have eight colleges piloting this and we've just started an HE pilot. This really is the area where we're seeing a lot of growth at the moment. This is almost a kind of middle layer: it's a tool that sits on top of OpenAI, but in this case it allows the teacher to produce a lot of useful resources, things like lesson plans, FAQs, multiple-choice questions and learning materials, without having to develop prompting skills themselves. They can simply click on a box, add the URL, answer a few questions, set the level, and it'll go away and create them. One of the really useful things I like about this particular tool is that it allows you to set things like accessibility, so you can tell it that you need the resources to be adapted because you have a student who is dyslexic or a student who is neurodivergent, and it will adapt resources to cater for those particular needs. That then, of course, allows those resources to be available for all students, because we all know that a lot of students find accessibility resources really helpful even though they might not officially have those needs.
I mentioned the word responsible, and this is some guidance we put out, which we're just in the process of refreshing. This is really a set of guidelines for institutions to go through before thinking about bringing AI into their institution. It's about making sure that people are doing it for the right reasons and that they've gone through the process of thinking about, you know, does this use fit with our purpose and culture? Are we ready to do this? Do the staff have the necessary skills to implement it? Does AI raise any specific issues? Have we got the right consents in place? Are our students the right age? Can they use this? And then practical training: we've got two mini MOOCs online at the moment, and we'll be adding to those shortly. One is an introduction to generative AI, a practical course on what it is and how it works, and the other is on AI and ethics. They're both short, they run on a monthly basis, they've got very active forums as part of them, they're proving very popular, and they're free to Jisc members. We're also at the moment working with our Building Digital Capabilities tool, so people who use that tool will shortly see an AI section embedded within it, just to expand its offering. And then this has probably been our most downloaded publication. We're very obvious with our names, as you can see. The generative AI primer is exactly what it says: if you've never used generative AI tools, this is your starting point. It'll talk you through how to get started, the kinds of things that are good to start with, considerations, what not to do, and how to do things like using it to create images. It's really good, really simple, really clear, and we've had lots of really good feedback on it. I mentioned we've got an active community: we run monthly-ish community meetups.
I know these dates are out of date, but the QR codes will take you to sign-up for the next sessions, which will be in January. We're not doing any December meetups; Christmas, of course, is much more important. So, looking at where we are now. We've had generative AI around since 2018, and it was just quietly developing in that normal, slow development phase that all technologies go through. Then in 2021 we started to see generative image tools come along, which started to spark a bit of interest. Then we had a few more of those tools and a bit further development in 2022, and that interest started to build. And then we had ChatGPT in November 2022, which just blew interest out of the water. And in 2023 we've had lots of new announcements, and they continue to come. And that's the thing that's different with this. It's a technological development, it's a new tool like any other tool, but the sheer speed of change is outside anything we've experienced before, and we're not seeing any signs of that slowing down at the moment. So that presents a particular challenge. Sorry, I thought I was going to sneeze then. ChatGPT. So, just looking at that specifically, and this is the free version I'm talking about now. It was created by a company called OpenAI. Contrary to its name, it's not open; it is a commercial company, and other competitors do exist. It's trained on large chunks of the internet plus some books. Humans help with the training by providing feedback, so every time we use it and we use things like the thumbs up and thumbs down, we're helping it to improve. And all it's really doing is predicting the next word based on the sequence of preceding words, which sounds quite simple but is actually quite effective. It can write plausible-sounding text on any topic. So if you've never used it before, please use it to explore a topic you know enough about to judge whether the outputs are correct or not. For all these tools, the human review part is critical.
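That "predicting the next word" idea can be sketched with a toy example. This is purely illustrative, a simple bigram counter over a made-up corpus, nothing like the neural networks behind ChatGPT, but it shows the basic mechanic of guessing the next word from what came before:

```python
from collections import defaultdict, Counter

# Toy "predict the next word" model: count which word follows which
# in a tiny made-up corpus, then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most likely next word, or None if we've never seen one."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # prints "cat": it follows "the" twice, mat/fish once each
print(predict("fish"))  # prints "None": nothing ever follows "fish" in this corpus
```

Real models work over subword tokens, condition on far more context than one preceding word, and sample from a probability distribution rather than always taking the top word, which is one reason you get different output every time you ask.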
I would never use any output from any tool without reviewing it first myself. It can generate answers to a range of questions, including coding, maths and multiple choice, and it's getting increasingly accurate and sophisticated with every release. It's also getting increasingly expensive. And it will generate unique text every time you use it. So when people talk about things like "ChatGPT passed a law exam today", I wonder quite how many times they had to try before it passed, and whether, if they'd just tried again five minutes later, it would have failed. It can carry out a wide range of tasks, particularly things like text summarisation and changing the tone of things. Limitations: it can, and often does, generate plausible but incorrect information, and this can include false references and false claims about its capability. When these tools were first talked about, I sometimes saw things on Twitter that would make me cringe. I'd see teachers saying something like, "I put my students' work into ChatGPT and it told me it had generated it." It will mostly claim authorship of practically anything you choose to put in: ask it "did you write this?" and it likes to say yes, it did. It has no clue whether it did or not. And what you've probably done, if you do that, is give the student's work to a generative AI training model. So ChatGPT, the free version, is only knowledgeable up to January 2022. It knows absolutely nothing since then, because it has no access to the internet. Now, concerns. We mentioned it's trained on large chunks of the internet and a few books. Large chunks of the internet includes large elements of things like Reddit forums, so it can and does produce biased output: culturally, politically, every kind of way you can think of, really. And the free version can generate unacceptable outputs occasionally. It's got a high environmental impact, with large water and power use. And there are some concerns around the human impact.
A lot of human training and fine-tuning is done on these models, and that's often done by poorly paid people in the developing world. And there is a danger of increasing digital inequity, which we'll explore later. But generative AI is much more than ChatGPT, and these are the six most common uses at the moment in education: chat, search, images, coding, writing and content generation. And these are just a few examples of the tools that are out there, but there are many, many more. So, just an example: ChatGPT Plus, the paid-for version, currently $20 a month, has several hundred plugins, growing all the time. It has some really useful plugins, particularly for education, things like Wolfram, which is a really good maths tool, and ScholarAI, a really good research tool. Lots of examples. And again, we're looking at that digital inequity. We talked about this a couple of weeks ago, and there's more information on our blog site, but just coming back to that issue of ethics and Ts and Cs: is everything you're putting into these models being used to train the model? Our blog goes into more examples. With ChatGPT, it does unless you opt out. With Google Bard, all your inputs are used to train it. With Anthropic, nothing is used unless you specifically opt in. And with Bing Chat, your inputs are used to train the model. Now, if you are a Microsoft institution with E3 or E5 licences, you will probably have noticed recently that you've got Bing Chat Enterprise. And if you've got Bing Chat Enterprise, please use that, because that means it's covered under your standard enterprise licence: inputs are not used to train the model, IP remains with the institution, and it's a much safer offer. And there's an age requirement, which differs according to the site, which again, you know, can be an issue, particularly for colleges that are dealing with under-18 students.
Obviously you can't control what they're using, but it's something to bear in mind when suggesting usage around that age limit. And the speed of change: we're still getting announcements all the time. So we had GPTs introduced about two weeks ago, where you can create your own custom versions of ChatGPT, and again we've got a new blog that's just gone online today talking through how to do this. And then last week we had the announcement that ChatGPT has now gone voice-enabled, so you can chat to it in just the same way as you do Alexa and Siri. So we're now in a space that we call "everyday AI", and that's because AI is getting integrated into our everyday tools. You know, when you pick your phone up and it does facial recognition, it's AI that's doing that recognition. If you're using the Designer element of Microsoft PowerPoint, it's AI that's suggesting those designs. I use Gmail for my personal communications, and every time I go to write a new email, it asks me if I want help writing it, and I can choose yes or no. It's just starting to be a part of our everyday life, so it's getting increasingly hard to take an avoid approach to this. As I mentioned Google: now within Google Docs, if you open up a new Google Doc, you get a little prompt which asks, would you like help writing? And you can choose yes or no. If you choose yes, then you get this little pencil and star, and you can put in what you like. Here we're putting in "write an intro to AI and education", just a very simple request, and it's produced a page of text within about half a minute. And then it allows you to refine it. There's a button at the bottom that says recreate, so you can say, try again, I don't like that. We've talked before about it giving you unique output every time you ask, so you can just recreate it or try again.
Or you can refine: you can ask it to change the tone, to be more friendly, to be more creative. So, you know, these tools are starting to be really easy and really helpful in reducing some workload. And I thought I'd just focus in on Gamma.app, which is not one of the big four but is a really useful tool. So hopefully this is going to play. You'll see it produces PowerPoints. You can say, I would like AI help, I want a presentation, and put in your topic. I did this the other week to have a play. I asked it to do employability skills for students in 2030, because that's my old job. It will then come up with a suggested structure. When I did this, I changed the order of a couple of those bullet points because I thought it worked better that way round. Then choose your format, or not, and it will surprise you. And this is real time, it's just a screen recording: it produces PowerPoint decks with appropriate images and with things like graphs and visuals. For me, it created a very snazzy PowerPoint, I would say, probably better than some of my PowerPoints. This is one of my PowerPoints, so you can judge. I then spent 20 minutes reviewing that and made a few changes, and then I would have been quite happy delivering that presentation. I mean, that human review is key. I would never, ever take any kind of output and just think, okay, job done. But what that had done is enable me to create a PowerPoint I would have been happy delivering in around 35 minutes. When I'm creating my standard PowerPoints like this one, it's probably a couple of hours, maybe a little bit more if you get lost down that black hole of looking for appropriate pictures. We all do that, surely; it's not just me. But that's quite a time saving for me. That's a huge chunk of time that I can then spend on something that's probably a lot more valuable than creating PowerPoints. Bing Chat and ChatGPT can now read pictures.
So you can do things like put a diagram in and ask it for suggestions on how to improve it, to make it more accessible or just easier to understand. So, what we're seeing across the sector: we're seeing a lot of focus around assessment and policy. We're seeing initial assessment guidance for students being produced, and some good examples. We're seeing staff starting to use it for process work, so things like grant applications and lesson planning. We're starting to see early thinking about how to use it in learning and teaching, how to think about it more creatively. We're seeing widespread use by students, and broad acknowledgement that work will change, but limited action on that at the moment. And we're increasingly starting to see AI within assistive technology. That's something, I think, to be really careful about when writing student guidance and thinking about an approach: if you outright ban all AI, you will be banning a lot of accessibility tools, because they use AI to work, things like Grammarly. And AI accessibility is coming on really well, so things like screen readers, automatic captioning and voice-to-text are getting more accurate all the time. So, looking at assessment, we've obviously got this spectrum. What's acceptable and what's not acceptable is something we're encouraging all institutions to discuss. It is not for us to decide what's acceptable and what isn't. It will vary across institutions, and a lot of the time it will vary across subjects: what might work in one will not necessarily be allowed in another. But we had lots of not necessarily helpful press when ChatGPT first came in, about it killing off homework, killing off exams, a lot of doom-mongering, probably. But when we talk about assessment, contrary to the media explosion, the idea of changing assessment is not new.
It's been around for as long as I can remember, if I'm honest; people have been talking about the need to change assessment and move to better assessment design. I'm just capturing this from a report that was written in 2020, pre-generative AI, calling for change. There are three real tactics. You can choose to avoid, which means the only real option is in-person handwritten exams: not very good for student wellbeing, lots of resource implications, and not very good for moving to better assessment design. You can try to outrun, which means you need to devise an assessment that AI can't do. However, AI is advancing so rapidly and changing so much that if you take that route, you have to be prepared to continually refine your assessment, because what AI can't do now, it probably will be able to do in three weeks. So you're taking on a huge workload if you take that route. Adapt and embrace is the model we are promoting as a centre: embracing the use of AI and starting to think about how to redesign assessments, starting to think about things like authentic assessment. I think we're all here for the same reason, which is really to best support students. Students are effectively living in an AI-enabled world, and they're moving into work or further study in an AI-enabled world, so we have to think about how we support them best to thrive and move into sustainable employment in that world. So, is it possible to detect AI? The short answer is no. The long answer is, well, kind of, but not that effectively. There isn't any system today that can conclusively prove anything has been written by AI. Of the ones that are around, Turnitin is the most effective, and it's only low-70s-per-cent effective. So that's around 26 to 29 per cent false positives, roughly a quarter of genuine student work wrongly flagged as AI, which is a huge workload and a huge requirement around investigation. And one of the concerns is that a lot of students are using AI to improve their writing. That's how I use it a lot. I write very passively.
So I use AI to make my writing more engaging and to help me improve my writing. And if you use AI to get feedback and improve your writing, the chance of being flagged is high. This is also particularly an issue for students who are non-native speakers, who are using these tools to make sure they're writing things correctly. Detectors will pick up their writing style as being the same as an AI writing style, because it's very, very similar, so they're unfairly biased against non-native speakers. All of the techniques can be defeated, and they'll all give false positives. So, bearing all that in mind, one of the things we thought we might do is come up with a practical resource on how to think about changing assessment. We did this as an interactive PowerPoint, and it was one of our working groups from our community that developed it, led by Isabel Bodic at UCL and developed with colleagues from a range of universities, who came up with this idea of how to change assessment. There are around 46 different types of assessment here. It relates to Bloom's taxonomy, and they're all rated on some key characteristics and where they're appropriate. These are the assessment categories the toolkit breaks things down into: things like controlled-condition exams, practical exams, quizzes and in-class tests, and open-book exams. If you click on one of those categories, it takes you through; they're all colour-coded, so if you click on open book, it'll take you to this purple slide, and you can then choose one of those options. In this example I chose "visualise a concept", down on the bottom left, and this is the card it'll give you. At any stage in the kit you can click to go back either to the open-book category or to the index and explore. There are, I think, around 46 or 48 different suggestions in there. And then, just looking at student perceptions.
So last spring we did a range of student discussion forums with both college and university students and then produced this report, Student perceptions of generative AI. We're currently in the process of updating that, so we're running a range of student forums between now and February, having those discussions to find out what students are doing and what their hopes, wishes and concerns are now. So, how are students using ChatGPT and other generative AI tools? This is the picture from when we did these last spring. It's around research tools; exploring concepts; getting started, getting over that blank sheet of paper; getting suggestions about different things they might consider, to give them structure, particularly where they've never had to do a certain thing before; getting feedback on their writing; and as a safe proofreading element. Students, particularly non-native speakers, are using it for translation and rewriting sections. And they're doing that because of how it works: it works in a very conversational style, so the student can ask the same question in a different way if it doesn't understand. They're often using it to get an understanding of what's being asked of them, to make sure they can respond appropriately. They can ask the same question 18 times in 18 different ways to get that understanding. Most students don't feel confident that they can do that with a tutor, or in class where there are peers. So it just makes it really easy for them to have that interaction and support at any time of the day. They're using it to iterate and discuss, and to create images to support things like portfolios. And occasionally they were using it to just paste in the question and get the answer out. Things seem to have moved on with this round; we're early in this round.
I don't want to be too precise, but it feels like students have moved on from using it as an answering tool, something they can ask and it will give them the answers, to using it as quite a sophisticated learning resource. They're using it to deepen and broaden their depth of knowledge by interrogating it and exploring concepts it suggests that are interesting to them. They use it to develop their own revision materials, so things like "develop a revision quiz". So it feels like we're starting to transition into a new phase. Student concerns: they're concerned about things like their information literacy skills. We don't teach information literacy very early; in places like the Netherlands and some of the Scandinavian countries, they teach it at primary age. Data security: they're concerned about safeguarding their own privacy, and also IP and ownership. If they're putting things in, like creative students working on their own photography, for example, who owns it once it's gone through a generative AI tool? They would like staff to use these tools, but if staff are using them to create things like lesson materials, they would like transparency. They're keen that staff do use these tools because they want staff to feel confident; they want to feel they can ask staff questions and get confident responses. They're concerned that there's no over-reliance on generative AI, because they do want to continue to learn; they want to make sure they still have intellectual development in their learning. And they're really concerned about the future, skills development and the potential impact. One of the things they're concerned about is digital inequity. So, just looking at things like free ChatGPT versus paid-for ChatGPT: the difference is $20 a month, which is quite a high cost, really. GPT Plus has guardrails, so it doesn't produce some of the unacceptable outputs. It's up to date.
It can search the internet, so you can get a lot more information that you can't get on free ChatGPT. And it has many more plugins, so it's really good for some educational uses, things like maths, coding, STEM, really good for those kinds of things. And it can now do things like analyse documents. So we're starting to see a potential widening of that digital divide, which is not a good thing. And then just expanding that to include things like accessibility: I doubt any student would need all of these tools, but you could see that a student might need GPT Plus and Grammarly, which is £26 a month. It's quite steep. You know, how do we balance this? So, student needs: what are they looking for? They're asking for more information, digital literacy, and an employability skills focus. They would like institution recommendations on what tools to use and how to use them. At the moment there's a lot of lack of clarity for students: even where there are policies, they say, "I don't really understand them", or "it's different from place to place, so I don't really know what I'm supposed to use", and they really would like very clear guidance and policy. They would like practical training, but they'd like to understand the future as well as the now, and they see this as important. They're really keen to contribute to the conversations. Then, just looking at that point about student concerns and skills of the future: we're starting to see a change in the employability skills we've talked about for a while, with employers valuing things like analytical judgement, flexibility and emotional intelligence. We're starting to see a focus on things like curiosity, creativity and critical thinking, the human skills which are going to be important whatever the vocational area and however AI develops, because these are skills that AI doesn't have.
They're intrinsically human by nature, and they're getting increasingly valued by employers: analytical thinking and creativity, according to the World Economic Forum; leadership skills, people who can influence and work well with others; and then, I think importantly, workers who will need to constantly gain new skills throughout their working life. We've started to see that cycle of moving through jobs speed up. Really, we need to be producing learners who know how to learn and like to learn, and that's the best way to support them to thrive. And on that note, I was going to stop for questions. I'm just leaving that QR code up for a minute, because that's the way to join our mailing list, and also the link to our website, which is where all the resources I've mentioned are; they're all on the web page. And then I'll flip back. So I was just going to stop for a moment for questions and then start again on the activity. Are there any questions? Mike? Yeah, I mean, lots of people have this idea of, you know, think, share: put something in, edit it, do something, discuss it. But you can ask the AI to do all that. You can say create something, then you can criticise it, you can give it a rubric to criticise it against, you can give it a persona and say how would this person view it, and so you can do the whole thing with AI and pretend it was done by students.
The thing I would say to that, Mike, is that if you're determined to cheat, you will cheat. You could do that before AI came out: you could use things like essay mills, you could copy. You know, part of this is about whether we trust our students or not, and knowing our students. My discussions with students indicate most of them don't want to cheat; they want to learn, and they understand that cheating is cheating themselves. I know that's probably not the answer you were looking for, but I think this is not a new problem, it's just a different lens on it. And it's the same issues, if I'm honest, that came up when the internet came out and people started using Google. And what I'm saying is you can get around that: you can get them to record what they're doing and submit the whole thing, so you can see the interaction with the AI, how they're prompting and what's coming back. Wow, I don't think that would be very good for student wellbeing, but yes, it's an option. Victor, your microphone... okay, he said he'll pop it into chat for you, Sue. Okay, whilst Victor is writing that... oh, he's popped it in now, I'll read it out loud for the recording so you can hear it: "Just wondering if there are plans for a research pack to help institutions to learn what their students and staff think. We ran a survey and a focus group within my department, but perhaps no need to reinvent the wheel." So, we're currently running another round of student discussion forums and we'll update our student report in the spring. I think there are various of these activities happening, and I know some institutions have been doing similar, but there's a thing about pulling those together that might be something to look at. Somebody also asked a question specifically about the surveys. Rory said: how have you gathered these views from learners, are they FE or HE, and how many have you gathered? So, they were a mix. When we did them before, we gathered
them by talking to students, so student discussion forums: a lot of students in a room having a chat, using Miro, things like that. That's how we're redoing them, so we're currently in the process of running the whole series with college and university students. Students are anonymous and everything's aggregated. Is that recent, or was it...? We did them earlier this year and produced the report; we're now doing it again, Rory, to update the report in the spring. So, both. Fantastic, thank you very much, Sue. I'm going to stop the recording now. Horace... okay.