Hello everybody. The clicker doesn't work. We're going to start with a presentation — it's not going to be too long, not too much listening to me — and then we're going to go into workshop mode, hence the Post-its and paper, as you'd expect with any workshop. So, we're from the National Centre for AI in Tertiary Education, part of Jisc — that's the full title; we aimed for catchy. We were set up about two years ago with the aim of accelerating the adoption of artificial intelligence across the tertiary education sector in a responsible way. And that last phrase is key, really. We're trying to take a pragmatic, practical approach, but not suggesting people should just leap with abandon into embracing all things AI. They should definitely explore and try it, but we also want to make sure people are aware of the negatives and the pitfalls, so they can make considered decisions. To achieve our aim, we do four things: we run pilots, we provide information, we run events (and attend an awful lot of other people's events), and we have quite a large and growing community. We're going to run through a few things — apologies, some people may see things they already know. We'll briefly go through understanding generative AI, then look at some specific challenges around assessment and meeting student needs, then look at some of our pilots, and then we'll move into the workshop. So, generative AI: things have moved quite quickly over the last 12 months, and amazingly quickly over the last six. In late 2022 we got ChatGPT, and that's when things started to move. But generative AI is more than ChatGPT — I'm struggling with words in the heat. This is just an illustrative selection; there are many, many more tools out there. We can now use generative AI for chat, search, images, coding and writing content, amongst other things.
It's interesting to show the timeline. We had GPT-1 in 2018, and things moved relatively slowly — relatively normally, really, for technological advancement. Then in 2021 we got DALL-E, and it started with the images; through 2022 we had a lot of growth in image generation. Then we got GPT-3.5 and then ChatGPT, which is when the explosion started to occur. In the first six months of 2023 we've had all of these and more. GPT-4 is paid-for, and just in the last two weeks an enterprise licence has been announced, which makes it more acceptable — GPT-4 also has more guardrails; I'll come on to that later. This slide was made on a day in July on which 27 new AI applications were added to Futurepedia — just as an example of the pace of growth in this sector. But it's also getting integrated into the tools we use every day, things like Microsoft Copilot and Google Workspace. It's coming. I'm sure everybody's seen the videos of Microsoft Copilot — I can't wait for it to arrive; it almost feels like magic. The idea that you can go into a Teams meeting and it will just tell you the actions and who's doing them afterwards, without you writing them down on a piece of paper and then being in a different room when you want to know what you're supposed to be doing. AI in Google Docs: if you're using Google Docs, you can choose to write yourself, or you can ask it to help you write and it will make suggestions — really simple to use, and you choose; you can toggle it on and off. Wolfram Alpha is now being added as a plugin — a really good tool, particularly for mathematics students and lecturers; a subject I'm really not going to go into, as I know nothing about it. And Gamma, an example of a tool that's not necessarily one of the big ones: it does presentations, it does web pages, and it does them really well. It works two ways.
You can either ask it to generate a presentation, or you can provide the content and ask it to do, say, ten slides. I was doing one of these in July — admittedly at 8.30 in the morning, so it was probably a little bit quicker. I asked it to write me a PowerPoint on the employability skills students would need in 2030. That's an area I used to work in, an area I know and understand, so I thought I could review it accurately. It took 30 seconds to generate ten slides with diagrams and images — way snazzier slides than I would have made in four hours. First it gave me the outline; I thought that was good and accepted it, though I could have changed it. Then the slides came, and it took me 20 minutes or so to review them and make a few changes, and then it was good to go. Start to finish, it took me probably 45 minutes, for something that would probably have taken three or four hours sitting at my desk. I would never have taken what it gave me and just walked into a room and presented it — that way lies failure; the human, I think, is always necessary here. But these tools are really good at short-cutting some of those tasks that eat up time. Sometimes I can just sit at my desk raging, trying to find the right image. So, ChatGPT in four points. It was created by a company called OpenAI, which, contrary to its name, is not open — it is commercial — but other competitors exist. It's trained on large chunks of the internet, plus some books, plus content that we continue to give it; humans help with the training by providing feedback — you'll notice it always asks how you feel about the responses. And it works quite simply by predicting the next word, given the sequence of words preceding it. It can write plausible-sounding text on any topic in almost any language. The key phrase there is plausible-sounding.
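The "predicting the next word" idea above can be illustrated with a deliberately tiny toy: a bigram model that counts which words follow which in a training text, then samples the next word in proportion to those counts. This is only a sketch of the principle — a real LLM does the same next-token step with a neural network over a vast corpus, not with a count table.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Sample a plausible next word, weighted by observed frequency."""
    followers = counts.get(word)
    if not followers:
        return None  # word never seen (or never followed by anything)
    candidates = list(followers)
    weights = list(followers.values())
    return random.choices(candidates, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "cat"))  # "sat" or "slept" — both followed "cat"
```

The output is different runs to run, just as ChatGPT generates unique text each time — the model samples from a distribution rather than always picking one answer.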
If you've never tried it, the first time you should always try it with something you know about, because then you can tell how truthful it's being. It can generate answers to a range of questions, including coding, maths-type problems and multiple choice — and it can generate multiple-choice questions on your content as well. It's getting increasingly accurate and sophisticated with each release, and it generates unique text each time you use it. So when people talk about ChatGPT passing an exam, you sometimes wonder how many goes they had to have for it to pass — and if they then tried exactly the same exam again, it would probably fail. But it's great at a wide range of tasks, like text summarisation, and it's really good for improving your writing and getting feedback. I use these tools quite a lot because of how I learned to write: when I worked for the civil service I was taught to write passively, and I find it quite useful to use these tools to make my writing more engaging. Or I might put something in and say "just generate me four tweets out of this", and it does it. It's got limitations. It can, and often does, generate plausible but incorrect information, and this will include false references and false claims about its own capabilities. If you put some text in and ask it whether it wrote it, it's highly likely to say yes, whether it did or not. And it's only trained on information up until September 2021 — that's the free version, not the paid-for one. And concerns: it does produce biased output — culturally, politically, probably any way you can think of. It can sometimes generate unacceptable output. It has a very high environmental impact. There are lots of concerns around the human impact, and around the ownership of the material people are putting into it, which we don't quite know. And there's the danger of digital inequity. So, assessment. This is how the media chose to focus on it — just this one aspect, rather unhelpfully.
They chose to focus on the idea that it was killing education because it could write essays — regardless of the fact that large parts of the sector had been pressing for assessment to change for the 12 years before generative AI came out. The best we can hope is that generative AI might be an accelerator that makes that change happen quicker; it's a change most of us have been asking for for years. So, tactics for assessment — there are basically three: avoid, outrun, or adapt and embrace. Avoiding it really means just sticking with the Victorian model of putting people in horrible rooms (usually cold, in my experience), making them write at teeny-tiny desks until they get writer's cramp. I spent almost as much time teaching myself to write fast for my exams as I did doing revision. Outrun: not something we advocate. There's always going to be much more effort put into beating these detector tools than into creating them, so it's just going to be a constant arms race. Adapt and embrace is the message that we are promoting: embrace the use of AI, discuss appropriate use, include students in those discussions, actively encourage its use, and create authentic assessments. Just explore, use, have discussions, and make informed decisions about how and when it's appropriate to use it. The slides will be shared afterwards, and there are a couple with quite a few links on. Literally hot off the press: there's a set of cards we published last night, based on Top Trumps, for assessment — assessment design in an AI-enabled world, with ideas for how you can change various assessment types. It's an interactive PowerPoint, and we're hoping it will help people tweak and change their assessments for the immediate year. And there's some really good QAA guidance out on how to adapt your assessments.
So, is it possible to detect AI? Kind of, sort of, but not really — all of the systems today are not that good. There are three techniques — writing-style analysis, classifiers, and watermarking — and each technique can be defeated, and each will give false positives. And there are a lot of data-processing and contractual concerns: where is the data you've submitted going, how are they using it, are they using it to train the model? We often don't know; that kind of thing is hidden in contracts. But even if you've got a model that says it's 97% accurate, that means 3% of submissions falsely flagged — an extra 2% on top of the roughly 1% of cases humans already query. That's a huge workload issue: people suddenly dealing with a tripling of queries. And that's if they really are 97% accurate. We ran a series of student forums over the spring. We spoke to students in both FE and HE, we spoke to them anonymously, and we asked them what they thought about AI and assessment. Most students are already using AI. They think it should be allowed in assessment, but they agree it should be used differently in different areas. Some of them came up with some quite creative suggestions — for instance, a kind of bookmaker's model of allowing a certain amount of credit for use; that came mainly from the English and FE students. On plagiarism detection, they wanted clarity over what's allowed and what isn't; they believe detectors can be beaten; and, quite frankly, they were really quite vocal in their dislike of all the focus on plagiarism, which they took as their tutors and academic staff implying they were all cheating. They were generally against automated marking, but thought assisted marking was much more acceptable. They were very much in favour of assessment changes, and the students were quite clear that they thought the focus of assessment should be more around their creativity and critical thinking.
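The 97%-accuracy arithmetic above can be sketched as a quick back-of-envelope calculation. The cohort size and the 1% human-flagging baseline here are illustrative assumptions, not figures from the forums:

```python
# Back-of-envelope: even a detector sold as "97% accurate" flags ~3% of
# honest submissions. Compare that against the cases staff already query.
submissions = 10_000                 # assumed cohort size (illustrative)
human_flag_rate = 0.01               # assumed: ~1% of cases staff query by hand
detector_false_positive_rate = 0.03  # the flip side of "97% accurate"

human_queries = submissions * human_flag_rate
detector_queries = submissions * detector_false_positive_rate

print(human_queries, detector_queries)   # 100.0 300.0
print(detector_queries / human_queries)  # 3.0 — a tripling of queries
```

The point is that a small-sounding error rate multiplies out to a large absolute workload once you apply it to thousands of submissions — and that's before considering how many flags land on students who did nothing wrong.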
And they really wanted to move to a different kind of teaching: less sage-on-the-stage, more practical, more involvement, more real-world, and including the uses and potential of AI. So, ways students are using ChatGPT at the moment. This is not to say we've decided where the line is between what's acceptable, more acceptable or less acceptable — that's the discussion all institutions need to have, making their own decision about where the line lies. It may be different in different institutions, different subjects, different levels, but it's a really important discussion to have. It's really, really important to be exceedingly clear with students and make sure they know where to find the information, because so many were saying "we don't have any guidance" when I knew they did. If the student doesn't know it's there, you might as well not have it. And it's really good to include students in some of those discussions. To go into one of those: why are students using ChatGPT for things like writing or translation? It doesn't just do the task — they can have a conversation with it. They can change the reading level, simplify it, change the voice. A lot of students, particularly those who don't have English as a first language, were saying they were using it to get a better understanding of what was being asked of them. And they can have that conversation with a tool 24/7. They can't always catch their tutor, or whoever, at the right moment to have that conversation. It's the fact that they can just keep going back until they get their understanding, and they don't feel they're pestering anyone. Students have quite a few concerns too. They're very concerned about information literacy.
They don't feel they necessarily have the skills to make those decisions for themselves. They're concerned about data security — what's happening with the stuff they're putting in when they use these tools? How do they safeguard their privacy and their ownership? If they're art students, say, using some of these tools, they still want to feel they own their own work. And, as we talked about, there's plagiarism and ambiguous guidelines. On regulation, students thought it's about striking a balance. They were very keen on high-level governmental regulation around big AI; they were much less keen on strong regulation within the institution than you might expect. Staff use: lots of them are aware that staff are using it. They wanted staff to be transparent and open, and they wanted to feel confident that they could ask questions and go to staff for help. They're very concerned about access and affordability, and also equity — people were saying things like "I can do this in my institution, but my friend over the road can't", because they talk to each other. So they're really keen on equity across the piece. Over-reliance: they're aware there's a balance between using AI and still continuing their own intellectual development. And they're really concerned about the impact on their future — the changes it's going to make to employment and jobs. So what they're asking for is more information: digital literacy, employability skills around AI, practical training in how to use it responsibly. They also want to understand the future potential — where it's going, what they should be thinking about. These are students; they've got a long future ahead of them and want to see where it's going. They want recommendations of tools they can use, and how to use them.
They're also very concerned about ethical considerations: they want more clarity, and they want it to go beyond plagiarism. They're very concerned about sustainability and human impact — the fact that the human cost is concentrated in less developed countries; we're here reaping the benefits while other people are being exploited to create these tools. And discussion: they really want to be involved in the conversations — please bring them into the conversations about what's happening. Just to illustrate digital inequity — and inequality; sorry, I'm struggling with some words today, I'm just hot and dry. Financially, there's a difference of £77 a month if you were to have Grammarly (and that's assistive tech — not something you should have to pay for), Midjourney, and ChatGPT Plus, as opposed to the free versions. And there's a real difference between the paid-for and the free version: the paid-for version is up to date, it's a much better model, you don't get as much unacceptable content, you don't get as much bias. And on the new enterprise licence, your data is not used for training — it stays and remains your own data. So the more you pay, the better the service you get, basically, which is obviously going to generate a huge amount of digital inequity. I think this is really just expanding something we already know exists: the poorer the students, the worse their access to tools, and the more support they need. We need to support our students in a good way. On to some of the pilots we're running, which are trying to speed things up for teachers. Teachermatic: they've got a stand out there, so I won't speak too much about them.
But what it does: we're running that pilot at the moment, with some eight colleges trying it for six months, and it makes it much easier for staff to create content and resources for their lessons. Graide: we've completed this pilot, so we've got the report up. This is a tool created by University of Birmingham students, and it's assisted marking for STEM subjects. It works particularly well for large classes because it learns as you go — it probably needs about 50 or 60 responses before it starts picking up and saying "you've marked this question in this way; would you like to apply the same kind of feedback?". Bodyswaps: that's being used here for the VR demonstrations, if anybody's been over there. It's an intersection of VR and AI, and it's really useful for soft skills — it helps the student put themselves in the position of the other person — and it's also been used for medical training and things like that. And a pilot we've just recently finished: audio learning, where AI generates learning resources — small podcasts — from your own existing content, so it gives you another way of learning from something you already have. So, that's enough of me; we're now going to go on to our workshop. It would be good if we balance the tables out a bit more, so a little bit of movement around — five-ish per table if we can manage that. You'll notice there's a big sheet of flip-chart paper on each table, marked up near-term, mid-term and long-term, and there are Post-its and Sharpies. What I'd like you to do is keep in mind the conference themes of diversity and inclusion, and sustainability and social justice, and map out your hopes for education in those three timeframes. We've deliberately called them near, mid and long so that you can put your own definition on them — what they mean to you — and do that in groups. But then also think about where AI can best support those hopes, or whether you think it shouldn't.
So you can do those as one task or as two separate ones, but if you can do them as discussions around the table, that would be brilliant, and we'll circulate around. Okay — is that on? Hello? Doesn't sound like it's on. Hello? Did that work? Can everyone hear me? So, do that thing where you bang on the table. Okay, everyone, we're going to feed back now. I've got no idea if the microphone's working, so hopefully people online can still hear me as well. This is the last bit of the session, so we can come together and talk about what we've discussed in relation to those themes. I'm going to be picking on groups, so hopefully you'll all nominate a spokesperson. I'd like to start over here, actually, with this group, because I know you had some really good conversations going. We're going to go near, mid and long term — which we purposely didn't define, just to make it a bit trickier. I was wondering if we could start with some of your near-term thoughts on AI and what your hopes are. — That's right. Yeah. And that was really about students and staff — it's really important. And upskilling, yes. — Great. Oh no, I like that. Brilliant, thanks very much. We'll move on. Do you want to talk about the long term? Sorry, that's the worst one — tell us the future. — We had a discussion. I'm reminded of the quote that anything in the world when you were born is just the way the world works; anything invented before you're about 30 is fantastic; and anything invented after you're 40 is the devil's work. So we did talk about that, since you asked us about this. As certain things become automated by AI, we will value the human skills more. Some things will be outsourced to technology, and the tools around them will be for the human connection and so on. We might compare students' work better with AI — have a more effective look at what they have said before.
Because it might spot when we're really not genuine — and we might get the more authentic assessment that we've been talking about for a long time. — That was a good answer. Yes, brilliant. I think everyone's feeling that on assessment as well. Lee, shall we switch around to the back right there? I'll let you pick whether you want to do near, mid or long term. — We talked about AI being used as a part of the answer, and we also discussed that this is an explosion that we should look at over the long term. But yes, whatever we get at the end, that's final. — Yeah, I love that. Definitely — with the generative AI boom this year, everyone is so focused on the tools and less on how we're going to use them to promote our humanity as well. Something for us to think about soon. Brilliant. All right, we've got a couple of minutes left, so straight down there: would you give us something in the short term that you're hopeful for? — We looked at what's at the core of learning. In the short term, it's access, and tools that can integrate with us. — Yeah, that's a really good one. Anything else you want to share? No? That's great. And the last group there — again, I'm going to give you the choice. — Yeah, brilliant. I love that there have been so many mentions of accessibility and neurodiverse users. That's something we've looked at a bit this year and really want to focus on, particularly in this coming academic year, so it's good to know everyone is thinking about it. Thank you very much, everyone, for coming and working together. I'll just leave you with a couple more links to our resources: the QR code will take you to our National Centre for AI webpage, where you can get all of our reports. We've just put out the third edition of our yearly report — a nice, really big report looking at all of AI and education.
And we've got links as well to explore AI, where we've got a series of AI demos — those are on the stand outside too, if you want to come along and play with them. And lastly, do stay in touch with us. We have a Jiscmail list — if you aren't on enough Jiscmail lists already, here's another one. We send a nice monthly newsletter there, which Sue sends out to keep you updated on those resources. We're building a community as well, and hopefully we'll be running some monthly sessions this year so we can keep in touch online and keep those discussions going. — So, thank you Sue and Helen for leading us today. This is an area where, as we were saying in our group, it feels like staff and students are maybe at a similar sort of place in this whole emerging AI world, and those of us in the room are trying to stay a step ahead and think ahead — but we don't really know. We had this struggle in our group about dystopian futures and utopian ones, as was said, and it's going to be interesting to see how it all pans out, so I hope we can continue these conversations. I thought it was really interesting yesterday to hear from the student who is actually being prescribed one of these AI tools, and using it to do exactly what you were highlighting on one of the slides, Sue — helping to digest some of these complex articles. So some of these tools are already making a difference in terms of inclusion and equality and helping those students. We'll see how it all transpires. Thank you everyone for your participation, and thank you to Helen and Sue for leading us — let's show our appreciation.