All right, I'm going to make a start; it's 3.30 for me, and welcome back. I hope you are all watered and comfortable on this hot day. This is our session in the emerging technologies stream, and it's my pleasure to be introducing the Durham session. First we have Paul Finley, also representing Matthew Wood, who's not here today, to talk about providing guidance and support to academic staff on the challenges and opportunities of generative AIs and LLMs. As a side note, and as chair's prerogative (my name is Adriana, by the way; I'm going backwards), I'd like to tell you that in Southampton, where I'm from, that was one of the questions we put to our prospective lecturers, and the variety of perspectives, the fears and the excitement, is very interesting all across the sector. And he's representing our new lecturers, so I for one am very excited to hear what Paul has to tell us. So without further ado. Thank you. So hello, I'm Paul Finley. I offer the apologies of my colleague Matt Wood, who can't be with us today. Today I'm going to take you through what we did at Durham when AI suddenly appeared on the scene last November, how we've tried to offer support and guidance to academic staff, and some of the lessons we've learned over the past nine months or so. I work in a department called DCAD, which stands for the Durham Centre for Academic Development, and I sit across two teams: one in digital learning, where I offer pedagogical advice to academic staff, and one in academic development, where I run workshops for academics, whether on teaching or research. So when AI appeared, it was right up my street. I think ChatGPT was launched on the 30th of November last year, and I first became aware of it as a big deal on the first day back after the Christmas holidays, when Matt put a video up on YouTube explaining what ChatGPT was.
That was on a Teams channel, and that Teams channel exploded as people started to go, ooh, this is interesting, look what I've done, look what I've done. Straight away, I think a lot of us saw this as both a challenge and an opportunity, and various people in my department did different things. We put up some initial guidance on our web pages. Our senior leaders started to work with the university exec to craft a position. And Matt and I suggested that we initially run a workshop just explaining to people what AI is, what it could mean, and what the challenges and opportunities are. I've got to say, we ran that first workshop in March, and we've run it eight times since, which is quite a lot for one single subject. And as an educator, it has been a unique experience. A bit about me: before I worked for the university, I was a teacher in the state sector for 10 years. I've been a lecturer and am currently a digital learning advisor, so I have a lot of experience standing in front of a classroom; I once calculated I've done somewhere between 8,000 and 9,000 hours at the front. So when I say it's a unique experience, that's not an exaggeration; it really was unique. Fundamentally, what that comes down to is that I've never dealt with a subject that is changing so fast. This is the only workshop where I've ever had to put a disclaimer at the very beginning saying: what I tell you today is true now, but it most definitely will not be true in a year's time. And that was a bit of a challenge, because quite often when I stand at the front of the room, I know the answer, or I know how to work out the answer. That wasn't always the case here. But anyway, back to my timeline. Durham adopted an institutional position of embrace, educate and enhance, meaning we're going to embrace AI, educate our staff and students about it, and enhance our practice. And as soon as that position paper went out, departments started to contact us asking, what's this all about?
Can we have a bit of support and guidance? So other colleagues, learning designers, started to offer department-specific workshops, which were a bit of a mix of the introduction workshop but also very much focused on assessment. Around the same time, in June, I ran a few pilots of very specific workshops on how you can use AI for teaching and learning, and on using AI for research. All together, since about March, we've delivered over 30 different workshops covering various aspects of AI, which compared to most other subjects is a huge amount; we wouldn't normally deliver that much on one particular topic in such a short time period. And we've learned quite a lot from that approach. Here are some of the lessons. The first lesson, I think, was to keep it simple, especially when it came to the technical detail. When we initially did this workshop, we felt we had to explain how AI works, and my colleague Matt stood at the front and started to go into what neural nets are, and machine learning, and all of that. You've got to remember this was a room of academics from across all disciplines, and the eyes started to glaze over a little bit. We didn't need to go into that much detail. So we tried something else: we tried to explain it as an input-output machine. That didn't really work either; it was just a bit too simplistic. We then tried to explain it as a ball of spaghetti, which kind of got away from us as well. What we eventually settled on was to just get a phone out, put it on the overhead projector, and go into predictive text. You've seen this in action, you've used it, you understand the basis of it. And through that medium, we could explain how these AIs work. That was quite a nice example: people could grasp what they needed and then move on to the stuff they really cared about, which was how it's going to impact their teaching and assessment.
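To give a flavour of that predictive-text analogy: this is not how an LLM is actually built, but the same "predict the next word from what came before" idea can be shown at toy scale with a word-pair (bigram) counter. The corpus and function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the text a phone keyboard learns from.
corpus = (
    "the cat sat on the mat "
    "the cat sat on the chair "
    "the dog sat on the mat"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the most frequently observed next word, like a keyboard suggestion."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("sat"))  # -> "on", the only word ever seen after "sat"
print(predict("the"))  # ties between "cat" and "mat" resolve by first occurrence
```

The jump from this to an LLM is scale and context: instead of counting one preceding word over a dozen sentences, the model conditions on thousands of preceding words over most of the web, but "predict the next token" is still the core of it.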
And when it came to their teaching and assessment, the key thing we learned is that we had to have our hook prepared. What I mean is that there are so many interesting and wonderful things you can do with an AI; in fact, there are too many. If I wanted to do a workshop covering all the various things you could do, it would be a very long workshop. So we had to target the things we selected and shared with our colleagues, and different things appealed to different segments, which is why we were very selective in what we shared across the different workshops. Some things that got instant buy-in and interest were essay plans, and giving it a piece of writing and having it rewritten to match a given style. One that some colleagues loved, especially those with a large cohort, was instant feedback: giving your students an open document at the end of the lecture asking, right, what are the three things you want to work on next, and then getting AI to analyse the responses. Colleagues who have a large cohort and feel like they never get to know their students loved that idea, because it gave them instant feedback from those large cohorts. Then there was developing your own tests, and creating chatbots. One thing I did, just as a trial, was to create a little chatbot that basically just had a module handbook put into it, and it took care of all the basic questions you might get about module admin; colleagues quite liked that idea. And again, literature reviews and data analysis: all things you can do quite easily, that appeal to researchers, and that can be quite effective. Initially, when we thought about how to deliver this, it quite often would be me or Matt standing at the front of the room doing a demo, which is okay, but it's not particularly engaging. So generally, when we selected one of these hooks, we tried to deliver it in one of three ways. The first way was play.
So we would do a demo, a very short one, and then just ask people to play, to have a go with it and see what happens. That was really important, because quite often when colleagues came into that room, they had heard of AI, they'd read about it in the news or the Guardian, but they hadn't actually tried it much, and they had no idea of its capabilities. I remember when we did a bit on translation, one lecturer in Classics said, I'm fine, it doesn't do Latin. And I asked him, are you sure about that? Try it. And to his surprise: yep, it does do Latin. And he was like, oh, okay, this now affects me, I can see the point. So giving people the opportunity to play was really important and quite a good way of doing that. The next bit was debate, where we would just throw a question up on the board: is this a good thing? Is this a bad thing? How will this affect you? People sat in mixed groups with colleagues from different faculties, and they talked and debated it. Those sessions were really engaging; walking around the room, you could see people having really quite intense debates about this. It was a really good way of getting buy-in and getting people to discuss it. The final way was adapt: we would do an example and then ask people to adapt it for their subject, their discipline. That way, quite a few people came across quite nice things that they've since tried out in their practice. The final thing, which I sometimes struggled with, was materials. Initially, Matt and I took materials from our own disciplines: Matt is a psychologist, I have a background in education. That was okay, but it wasn't brilliant, because generally we were using material that most people in the room had no great expertise in. Trying to find materials where people had some common understanding was a struggle, until we hit upon a few different ideas.
One: we took a lot of material from our PGCAP course, which a lot of our academics have to complete if they want to get their Fellowship of the HEA, so they had that common knowledge and could comment on it and understand what we were doing. The other module I found quite useful at Durham is something called Scholarship in Higher Education, which is aimed at first-year undergrads. It covers how to write for an academic audience and how to do things like argument construction, things an academic does all the time when writing articles. People across all disciplines have a background in that, understand it, and could see how it was being used, so it was quite useful for that. All together, we put these three things together and created quite a nice delivery method for the different aspects we wanted to talk about. That's what we did in the introductory ChatGPT opportunities-and-challenges workshop. After that, we started going into the more department-specific workshops. In those, we wanted to talk about assessment and to start that discussion with our departments, because at Durham the departments have complete freedom in how they approach this; it's not driven centrally, and departments are responsible for making their own AI policy. So every department wanted a bit of background before deciding what they were going to do. We went in aiming to present all the various options they have and to discuss them in enough depth that they had a grasp of them. Now, let's be honest, we had an agenda here, okay? I won't say we wanted to guide people to the good decisions; we wanted to guide them away from the bad decisions. So we tried to cover every single base. Right, first option: you could ignore it. Can't recommend that; it's risky, it's out there, students are using it. So, next: prohibit.
Again, we looked at this in the short term and the medium term. Prohibit? There's no real way to enforce it in the short term; we don't have AI detection tools that work. In the medium term? Yes, you may have heard of Turnitin's tool (apologies to anyone from Turnitin in the room): what we've been saying is that we have great doubt about whether detection tools will ever work to a satisfactory degree, and we make our colleagues aware of that. The next option: invigilate. At Durham, that very much means sitting in a hall or a room somewhere and doing an exam with invigilators walking around, but depending on your own context, that could mean proctoring. And there we've said: that could work, that could be appropriate, it's for you to decide. Next: you could try to design around it. Know what it can and can't do. One aspect we picked on was that it's not particularly good at reflective accounts, because it doesn't have access to your own personal history. Now, I'll concede that if you feed in enough prompts about your personal history, you can probably get something out, but at that point you've essentially done the task already. But to design around it, you have to understand how the technology develops and where the gaps close. An example of this: just on Friday I had a colleague send an urgent message saying, can you please talk to me now? Okay, what's the problem? And she said, I attended your workshop over the summer, which was great, and what I wanted to do was an activity where I put an essay topic in, and then my students and I discuss what the answer is like and why it's not very good. She did this in June and it seemed fine. Last week she was preparing for the new term, she did the exact same thing, and the essay that came back, she said, is too good; I can't do this with my students, it has no obvious weaknesses. And I said, yeah, that gap's closed; AIs are improving all the time.
And that's going to be a challenge if you try to design around it, because you plan for January thinking, it can't do this, and you get to January and that gap has closed. Yeah, that's going to be a problem. The next option was to encourage the use of AI in your teaching practice and your assessments. Now, in the short term, our two principal concerns are equity and privacy. You can pay extra to get GPT-4, so is it fair that some students will have access to a better AI than others? No, it's not fair; it's blatantly not fair. So that's a problem. Privacy: we don't know what these companies are doing with your data all the time, so there's a bit of worry there. In the medium term, what we were saying to our colleagues was along the lines of: at some point there will be institutional licences for this kind of thing. The best guess is that Microsoft will release something at some point that's integrated into Office, where we'll all have access to it, but that's coming down the road, so be aware of it. And just be aware that you'll have to stay at the top of the innovation curve if you want to use it. And the final option, which is the biggest one, was: actually, maybe we need to rethink what we're doing completely, and not be welded to this idea that assessment is something that takes place at the end of the course via an exam, whether that be open book or closed book. In the short term you can't do that, because it takes a long time to think about, and in the medium term there are workload implications, because you're doing a complete redesign of what you currently do. So you need to be aware of that, and be aware that it could come. In those slides we very much went through the principle of it: okay, this is where we are. But next, we had to marry that with the practical.
So at Durham, it's a case of: well, these are all lovely ideas, but what do they mean in practical terms? In our case, it means changing our module outlines, which are done on a very long timescale and are very carefully worded. We gave our colleagues an idea of what that involves, and also a little bit of advice, because what we didn't want was for them to write a very detailed description: AI changes, and these documents are very hard to change in our institution, though that will vary for different ones. So we married the practical with the theory to show what you needed to do. After all these workshops, we asked for feedback, as is our standard practice; but because we'd run such a large block of workshops, we wanted more detailed feedback. So at the end of the academic year we sent a survey to every single person who had attended any type of AI workshop, asking for their feedback. What we found is: yes, they liked the general overview, they found it helpful; they enjoyed the debate and the discussion, that was good. As for the aspects that could be improved, there was no grand theme, no one big thing, which was kind of disappointing, because one big thing would have been really easy to fix and I would have loved that; instead it was all very department-specific. And when asked about next steps, they said, okay, we have to look at departmental assessments. I'll be honest, when I saw that set of results I thought, yeah, I could have guessed that's what people would say. It wasn't that interesting to me, because it didn't tell me much I didn't already know. But thankfully, we did ask a few more questions, and what I found fascinating was when we asked them how confident they were with different aspects of AI. I was quite shocked by the results, because I've never seen a set of results like this, where orange is the unconfident end. This is my key.
Okay, it's in the middle, and orange means not confident at all. I've never seen that from an academic audience; usually an academic audience is quite confident in a thing, but here we saw across the board how uncertain people are about using AI, even after attending AI workshops. To me, this says there's a lot of work to do. When it comes to using AI tools in research there are a few maybes, but the uncertainty is mainly in content design and student assessment. Identifying AI-generated content in student assessments: rightly, I think, people are uncomfortable with that and know they can't do it reliably. Marking assessments using AI: again, people aren't sure how to do that. So we're starting to see what our priorities are for the coming years. One thing we never actually touched on much was using AI tools to mark assessments, and I think that's a topic that's going to come up more and more as the year goes on. And again, as you can see here, on using AI for learning tools and content, the one place where we actually start to see a bit of confidence is where people are starting to think, oh yes, this is going to have an effect on learning and teaching outcomes, I need to think about that. So I thought that was quite an interesting set of responses, and from it you start to see some of the areas we're going to be working on. Over the next year, our plan is to work on a few different areas. One: student guidance, because we haven't done a lot of student guidance at all; we haven't explained to students what they can and can't do, or what the possibilities are for them. We're also going to run a recurring set of three AI workshops, one on learning and teaching, one on research, and a final one on prompt engineering, to try to upskill our staff in those areas.
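To give a flavour of the kind of thing a prompt engineering workshop covers: a reusable prompt template that separates role, task, constraints and source material. The structure here is a common community pattern rather than an official recipe, and every name and string in this sketch is invented for illustration:

```python
# A minimal prompt-template builder: one common prompt-engineering pattern is
# to state a role, a task, and explicit constraints before the material itself,
# rather than pasting the material in with a bare question.
def build_prompt(role, task, constraints, material):
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints: {'; '.join(constraints)}\n"
        f"---\n"
        f"{material}"
    )

prompt = build_prompt(
    role="an experienced academic writing tutor",
    task="Give three specific suggestions to improve this paragraph.",
    constraints=["keep the author's voice", "reply in no more than 100 words"],
    material="(student paragraph pasted here)",
)
print(prompt)
```

The point of templating it is consistency: staff can swap the task or constraints per module while keeping a predictable structure, which makes the model's output easier to compare across uses.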
And finally, we've been offering grants to academics to try out new approaches in their disciplines and gather evidence over the next year. Come next summer, we can review that, see what works and what doesn't, and plan ahead from there. So with that done, I'm happy to take any questions. Thank you, Paul. I'll come to you. Thank you very much. I was very interested in the fact that you said you enabled them to do a bit of a play session. At the beginning, how did you navigate the question of which tools to use? Because I think that's something a number of institutions, ours included, are wrestling with. We've got some advice to give, but we're having to caveat it all in terms of, well, we don't really know what to suggest, and because we know what the privacy concerns are, we certainly don't want to lead people into that trap, you know what I'm saying? Yeah, absolutely. There were a lot of caveats. The one thing I would say is there's a rule: for anything you put in there, assume you've lost control of it. So just don't put in anything sensitive, or data you don't want published. When it came to individual tools, at the beginning it was very easy: it was ChatGPT. Now, beyond that, I recommend Bing Chat, which is pretty much the same, Claude 2, and Google Bard. The one I've found easiest to get most staff set up on is Bard, because everyone has a Gmail account. But it all comes caveated with: yeah, there are some concerns about what's going in and what's coming out. Any other questions? I'll give the mic to you. Sorry. As somebody who has some responsibility for creating guidance for students where I'm from: have you encountered any resistance or concerns from your staff about providing guidance to students?
Yes, anecdotally there is concern, because the debate we have is: do we want to teach this? The worry for staff is that students will use it to cheat. But our policy is that students are going to be upskilled on this and embrace it, so it's about trying to, you know, go at the same rate as staff and develop their courses as well. But I don't think we have a definitive answer yet on the best way to do that; it's still an ongoing debate. There was a question down there as well. I missed that, sorry. Oh, okay, it was similar. Okay, any other questions? You're making me run across everywhere. So there's the concern about students using it for nefarious reasons, but on the flip side, the point you mentioned about embracing it to help with learning: is there any concern that students will get wind of that and start to go, well, hang on, what am I paying for if you're going to start marking my materials with it? I mean, you could then say they're trying to cheat on their assignments, but I was wondering, how do you see that being navigated? That argument came up quite often, and where it always seems to go is: what do you want out of university? Why are you here? What is the point of an education if you can just do it all by AI? What are you getting out of it? When people brought that up, I was very much advocating that we need to talk to students about why they've come to university and what they want to get out of it. And that's something we don't actually do that often: they just come here, sit in the room, we lecture them, and we never have that discussion about why they're sitting there and what they want to get out of this experience. I would personally advocate that you need to have that conversation and say: it's about the skills you develop by doing this work. That's what you're getting out of it, not just the degree certificate at the end.
It's the hard skills that you will use in the world of work later. Another question here. Thank you. It's just a follow-up comment, because we're planning to launch our student sessions and our staff guidance in parallel, and that's exactly the consistent message. I think the golden thread between the two is to get students to understand that these are important employability skills, but not to become over-reliant on it, because they're paying for this premium product, which is developing their criticality skills. Just as we can't stop them from using essay mills if they really want to, they are ultimately just selling themselves short by doing so, I think. I'm going to take the last question as chair's prerogative. One of the things we have been discussing in Southampton is how this can be an opportunity for students whose English skills are not great, and also for those of us who have to read their products, their way of thinking. It can actually be empowering: if the assessment is well designed, they produce something readable that actually represents their real thoughts. So I wanted to hear your perspective: what elements of assessment design do we need to take into account for an LLM in particular to be empowering, rather than something that's used to cheat? Yeah, so when we did the teaching session we went through the different scenarios in which students use this. There was the scenario where they just get it to write the entire essay, which is a no; and there was the one where they get it to rewrite the style of the essay, to improve the communication and convey their core idea. I thought that one was very powerful, because when you present it to people, they say, I can see the point of that; it just helps improve the writing, and that's a valid use. But I think it's about having that discussion with people and saying: have you thought about that? Are you happy with it?
The majority of people I've had that discussion with are happy with that approach and can see the use of AI in that context. Thank you very much.