engineering degree for us. Ooh, so yeah, happy to — oh, there are lots of questions in here. Okay, so thank you so much, Fiona. That was wonderful; it followed on very nicely from the previous speaker. I've been writing some comments about the similarities and themes there, but I'll open the floor now for questions. We have a few minutes for that, so if anybody would like to take the mic or put questions in the chat, please do. Thank you.

While people are typing: you mentioned that you used a different framework for the strengths — is it CliftonStrengths, or something different?

It's called both Clifton and Gallup; I think at one point Gallup took over Clifton.

Uh-huh, okay. Are there any challenges with using that? Do students wonder why they're having to fill in a personality-type form, or are there any other difficulties you've faced so far when using those tools?

Yes, student engagement is an issue. As I said, some of our students have come to do some hard maths and science, and you suddenly ask them to take something that looks like a personality test — or could be argued to be one — and they don't quite see where it fits. Also, because I work a lot with first-year students, and I'm sure colleagues will have seen this, they come in very marks-focused, because obviously that's how you pass your A-levels. So if it hasn't got marks attached, there can occasionally be a bit of a "well, why am I doing this?" But one of the reasons we went for CliftonStrengths rather than Belbin is that we found that if students were given a role, some of them assumed that was their role forever and ever, and that they could never be anything else.
And I think, particularly for those students for whom English is a second or third language, understanding the nuance of what we mean by Belbin and how it works can be a little bit tricky, as can being able to have that conversation. So yes, we have found a few issues with it, but overall we found it a bit more useful for what we wanted. We don't use it for group formation, for example; we tend to use random group formation, although we do make sure we've not left one of our female engineers in a team with nine other men. So we do try to do a little bit of that as well.

Okay, thank you so much. If there are any further questions, please post them in the chat or feel free to take the mic. We have maybe a couple of minutes — any last questions for our second speaker, Fiona? If anything comes up in the chat later on, I'm sure Fiona will be happy to respond.

Yeah, could you say a bit more about accessibility issues that you might have faced?

Yes. One of the accessibility issues we have is that UCL is nearly 200 years old; some of our estate wasn't built with wheelchairs in mind, for example. Being in the middle of London is another issue, because it's very difficult for us to build new space — we'd have to start knocking down the British Museum for us to expand, and they don't really like it if you suggest that. So in terms of accessibility, it can be really tricky for our students even to navigate the estate, and that's something we have to be very aware of. We also do a lot of work with our virtual learning environment — we use Moodle — so we make sure our online content is very accessible. We do things like ensuring our slides are up as early as we can.
Beforehand, we try to record sessions, although I would say with teamwork that's very tricky: we have lots of recordings of rooms full of people talking to each other in teams, which is not necessarily particularly useful for our students. So we are really trying to understand what would be helpful for them. Is it useful, for example, to have an extension on coursework? That's one of the questions we're working on at the moment. And how do our students use their extensions? Are they always using the full week — a week is our standard extension — or are they really only using one or two days? And how do we ensure they're not being pressured into using the full extension by the rest of their team, who've seen that they can get more time? It's really interesting having those conversations with students about how they have those discussions. So there are certain issues around how accessible we can make aspects of it. As I said, we try to make sure we're in places that are accessible estate-wise, but it's not always possible, and we try to provide students with what they need in terms of workspace. But again, we are resource-limited, so that can become a tricky issue.

Okay, thank you so much once again. Maybe a round of virtual applause for Fiona, and we can slowly move on to our next presenter. Maybe while we're setting up — I don't know how ready we are; let's find out. Kath, how are you doing? Our next presenter, Kath Klamber.

Hi there.

Hi, yes. Are you ready to go with the slides?

Yes. Super, super. I just wondered, Manish — I can't share my... Can you see my slides at the moment?

No, not at the minute.

Okay. I've got them sharing on my screen, but I think it's not going to work.
So is there any way I could ask you to share the slides for me, please?

Yeah, I do have them, so I could do that. Let me just try — I did download them. In the meantime, if somebody wants to go and grab a cup of coffee or just stretch their legs, feel free to do that while I'm finding the slides. There we are.

Hi, I'm not sure if... Can you hear me, Manish?

There's an echo coming back from one of your devices — maybe you've got two applications open.

Yeah, that's what I should check. If I can find the PowerPoint... Oh, there it is. Can you see it?

Yes, I can see it.

And Manish, is my audio functioning okay now?

Yes, perfectly fine.

Oh, good. Let's see — could you go to full view? Are you looking at the presenter view or not?

Presenter view, if you can do it. If not, don't worry — I'm happy with that.

Yeah, I don't know why — I've got two screens here. Let me just take the other screen out; it might fix the problem, or it might give me some other problem.

Okay, don't worry. If you're happy with this, we can continue.

I'm fine with that. Hopefully people can see the text easily.

Okay, so just give me the cue whenever you want me to switch to the next slide, and I'll do that.

Okay, thank you so much. So, I just want to say thanks for the invite to speak today. My name's Kath, and I'm an educational developer at Central Saint Martins at the University of the Arts London. We're going to talk today a little more openly about compassionate approaches to feedback, within the post-pandemic context — as Fiona said, many of the themes you mentioned, Fiona, are coming up at my university as well, with large numbers of students. But I've got a quick question for everybody. Would you go to slide two, please, Manish? I thought we might have a bit more of a conversation today, if that's possible — a bit ambitious, given my university computer is blocking so many pop-ups.
But could you go to slide two, please, Manish? It isn't showing yet — give it another nudge and it will go... There we go. Okay, so you're still looking at the PowerPoint rather than... Okay, sorry. No problem.

So I just want to ask everybody here — please feel free to put your cameras on if you want, or pick up the mic. Given it's May the 4th, some tough love out there: does anybody remember a teacher who was particularly tough on you when you were learning, when you were growing up? I'll give you an example: my English teacher told me I was a terrible writer, and so for many years I thought I was a terrible writer. What do you think? Would anybody like to put a word in the chat or use an emoji? Thank you, Lucinda. I'll give it a minute to let that flow in. And again, I'd encourage everybody to pick up the mic — my presentation is a bit less transmissive, so we've got time.

I think everybody's still getting a coffee, Manish.

Possibly.

Yeah, I share that sentiment, Lucinda — I don't teach like that either. And I think there's a regime — I'm talking about the art school model — which is quite critical: the idea is you put your work up, or, to bring it to engineering, you put your engineering project up, and the aim of the tutor is to pull you down a little bit, to dress you down. And in the fashion business we hear, "well look, it didn't do me any harm — look at me now." Would you like to come in and elaborate a little bit more? You don't have to, but maybe give us an example — or would you rather not talk about it?

Can you hear me?

Yeah, I can. Thank you for coming in.

I had a technology teacher — this was during my GCSEs, when I was at school — and I wasn't very engaged in the class. I was about 15, 16, and I was just messing about with my friends, so we didn't really get on.
And I told him I wasn't very engaged with his teaching — I shouldn't have; the folly of youth. So he kicked me out of his class. And then on GCSE results day, I ended up getting the best result in the class, even though I'd been removed. He said to me at the end, "oh, just imagine what you could have learned off me," and as a plucky 16-year-old I was like, "yeah, I probably would have done worse." So that was my experience of a teacher being tough on me.

It's quite interesting, though, that you remember that quite clearly all these years later.

Yeah, I don't think I'll ever forget that.

Would anybody else like to join in and share? Lucinda?

Yeah, I don't mind sharing. Can everyone hear me?

Yes, we can. Thank you for coming in.

So this was a maths teacher. I'm dyslexic and I reverse numbers, but that wasn't really recognised because I'm quite old — it certainly wasn't a thing in the primary school I attended. We used to get called up to the front to do calculations in front of everybody, much like you've described, and, for obvious reasons, I would get them wrong. You had to do it in a particular way as well, even though other ways would give you the right answer. I don't think my way of thinking aligned with the way the teacher wanted me to produce work, so even when I got the right answer, I was told I was wrong. And much like you with your writing, for many, many years I thought I couldn't do maths and I avoided it. Turns out it's not true.

It's not true, I know.

Having since come back to doing a variety of things with mathematical concepts, it turns out I love it. Who knew? It took a very long time for me to come back.

Who knew? I think what you're talking about is taking a long time to recover from that, and you're absolutely right. Manish, would you just go on to slide three for me, please? Thank you. Which is: I still think about that.
Now, I've just published my third textbook, and I think: Mr. Williams, if you could see me — I can write; it just took me a long time. Let's go on to slide four, since we've cut this material.

So, to think about how this applies to teaching and learning: we've introduced some interventions into our feedback approaches, and we've deliberately gone for a compassionate feedback approach. We've tried to remove that studio culture of dressing people down — the thinking that tough love will ultimately pull somebody apart and build them back up into a new model — partly because we've searched and searched and we can't find any research saying that being cruel to students helps their cognitive ability. It definitely doesn't help you remember things. On many of our courses, 35% of students declare a disability or a specific learning difference, so we've got quite a large population. Now, this hasn't always landed well. So what we did was run feedback-writing workshops with staff, where we critically looked at their feedback text. We also introduced resources, which I'll share the links to — there's a compassionate feedback glossary on the handout. And we do assessment mapping for courses, making sure assessments are tied together and the learning is scaffolded. One of the main points is not being really tough on level four students when they've just arrived — again, really interesting point about the engineering students in their first year; you can't treat them as third-year students, but many staff do. Moving on to slide five, here's a quote giving our definition of compassionate feedback. It doesn't mean being really nice and kind all the time; it's about being more detailed.
It's about being precise: using less tentative wording like "perhaps" and "maybe", and fewer English colloquialisms like "if you wouldn't mind" or "do you think it would be good if you did such-and-such". Those things don't translate — 48% of our students are international — and they confuse students with neurodiversity too. On the fourth bullet point, we have to remind staff that giving feedback is not about the tutor saying, "I would prefer you did this because I believe in X"; it's about judging the work, not evaluating the performance. So, Lucinda, not caring how you got to the end result, or the process you used, but asking: did you meet the learning outcomes? Fine — not judging the way you did it. And on the last bullet point, when we looked at feedback samples, we noticed we were not giving the same amount of feedback — the same length of feedback — to students getting lower grades as we were to those getting higher grades. There was more to say about the higher-graded work, so those students got more feed-forward.

Could we have the next slide, Manish? Thank you. This one is an example of what happened after some of those workshops: once staff had used the compassionate feedback language, the forms and the systems built into our assessment structure started to change. The reflective tutorial form in first year used to say something like, "tell us what you did; what are you going to do next? Give us four bullet points." One of my colleagues, who very kindly said I could share this example, started using affirmations within the tutorial form. This is in a cohort of 200 — so, referring back to the points made earlier about how on earth you make an impression on a large cohort, this kind of language can definitely help. Now, you can see I'm not sure about that first statement — "You're amazing. We really mean it." I didn't write this.
So I'm reflecting: is that too much? I'm not sure — it might feel uncomfortable to you — but to this stage leader, this year leader, it felt appropriate. And you can see the wording: "what positive changes can you identify in the way you are working?", "do you feel equipped to deal with future design challenges?", "what practical skill sets are you taking forward?" This is affirmative language — it's not corrective and it's not criteria-based. What happens is that the students open up their reflections and give more of themselves. I'm just going to stop here. Would anybody like to comment on this? Might this be appropriate in your university? Do you also use compassion in your digital forms? Because one of the things I'm trying to do is get this in there: when we move to digital forms, uploading things to Moodle, submitting PDFs, putting things into Turnitin, we tend to use much more mechanical language. Does anybody have any thoughts on that? I'll stop speaking for a minute — you can pop something in the chat or pick up the mic.

I have a question, just to understand it a little better. These questions, the three bullet points on the slide — are they part of the feedback, or are they the form?

No, apologies — this is a tutorial form. It's asking the students for their reflections. You can see it's quite a soft approach.

No, I think I do want to start using that kind of compassion, and I'll talk a bit more about it in my own presentation as well. But yeah, I'll let other people take the floor.

Yeah. Lucinda, you might have to unmute.

Yeah. In terms of what you just mentioned — they like it when you put smiley faces in.
You know, I used to hate putting smiley faces and emojis in — I reacted with horror — but I realised that they translate internationally across languages, and they are, obviously, emotional: emoticons. But this is just the beginning. What we're also trying to do at this particular stage is build up retention. This course has an extremely high level of retention, so we're hoping that if these kinds of approaches work, we might move them onto our courses that have low retention. And I'll give you some results in a minute. Thank you for that comment.

Manish, let's go on to the next slide, please — slide seven. Having run the feedback workshops, I just want to let you know what the external examiners said after they were implemented and staff were giving this kind of feedback. The external examiners said the feedback was much more detailed and tailored to the individual submission, and outlined the weaknesses much more closely against the learning outcomes and assessment criteria. This was fine art, so it became a lot less based on the subjective views of the tutor. The clarity of the feedback across submissions was commented on, and we were pleased that that happened. And I know what you might be thinking, those of you in law and engineering, etc. — I've got a lawyer and an engineer son myself — you're thinking, maybe we need more precision, maybe we need more corrective feedback. But we are giving that as well.

If you go on to the next slide: what did the course leaders say about the results of this? The course leader said that, for the first time, students said they understood their feedback; few of them sought help interpreting what they needed to do next — that's a paraphrase — particularly international students who had needed translation of the nuances of the language. So the course leader was delighted, because she saved a lot of time. This is a different department, over in Camberwell.
Different — I think it was the sculpture department. They didn't waste, if you like, so much time going over the feedback. Next slide, please.

Just on some of the stats: I'm very keen on understanding what's happening with inclusion and with disability. Using more compassionate feedback and better feed-forward throughout, we found the group that benefited most was students with disabilities and specific learning differences. You can see that in 2021 their attainment was at 79%, and in one year it moved up to 88%. There were very few other factors — actually, there was one other factor at that time, which is that we also introduced a decolonising-curriculum intervention. But again, this is in fine art, and this is where 35% of the students were declaring disabilities. In the green text are the specific learning differences — dyslexia, dyspraxia, and I can't remember the other one. They also went up, from 76% to 88%, which was another helpful stat. Please go on to the next slide. Thank you, Manish.

There's a lot more to say on this, but I can see time ticking, and I wondered if there was one thing we could do to be more human and compassionate. We haven't really talked a lot today about AI and its implications, but I'm really hoping that the compassionate methodology is not done as well by machines as it is by humans, because those of you who've been to the ChatGPT sessions will know that often the voice of the computer is quite bland. People have said — not me, because I don't have the evidence — that it's kind of like a white, male, able-bodied voice. So I'm hoping that our human and compassionate traits might actually help us survive in this world of AI. Manish, what do you think? You've put an AI and a smiley in.

Yes, I think the reason it's bland is that they want to play it safe and they don't want to put a specific spin on it.
But I think with prompt engineering you could make the same AI — which is bland by default — turn around and be a bit more emotive, a bit more careful and compassionate, as you're saying. And what would be nice to hear from you on this occasion is: what other things are you telling staff to do? What are the key things? If you gave those exact same instructions to a language model, I'm quite confident it would come up with something close to what a human would, as a result of you explaining what you want the feedback to look like.

That's interesting. So let's just see — so you're suggesting we prompt the AI. I'm just going back to my slides on my other screen, to the prompts. If we went back to slide five — so what you're saying is that if we advise the AI to include these things, that might help?

Yes, I think so. If we say "be precise and avoid tentative wording or colloquialisms", or whatever, then it will do that, because it's a machine: it takes input in natural language and puts out natural language, and if you give it the right parameters, it will try to obey you.

I mean, what we hope to do, of course, like everybody else, is use AI to take away some of the hard labour and buy ourselves a little bit of time — for face-to-face conversations, negotiation, and alliance-forming with our students: the time that we feel is taken away by filling in forms for 200 students. That's not as many as Fiona mentioned, but still. Thanks for that, Manish. So can we go back to slide 10, please? And then on to 11, please. I just want to say thanks. I've used this illustration of the University of Then and the University of Now — have a think about that later. There's this stereotype now that the university is full of activists — what does it say?
They're on FaceTime, they're full of anxiety, and they're remote. Whereas some of the tutors who learned to give that tough feedback were part of a past university that perhaps doesn't exist anymore — one that really did privilege some of the more gifted people. If you go to slide 12 for me, please: I'll send these references over, and also the link, because we'll be able to share the slides, won't we, Manish? Okay, so can we stop sharing slides now and perhaps just have a conversation, if anybody would like that? And I think we're just about at time.

Yes, don't worry about stopping the slide sharing — I can't at the moment. But feel free to ask questions via the chat or on the microphone. We have a few more seconds.

We have also reduced awarding gaps for Black and minority ethnic students using this method, but I've published that elsewhere, so I wanted to keep this to the vocabulary, just to make that point today.

A very important point, and especially with the tools around, I think there is definitely something to be done in that area. I tried doing something within my own coursework — I'm kind of feeding into my own content later on — but I'll save that for later. Does anybody else have any further questions for Kath?

Hi, Kath. That was really interesting — I really like the idea of compassionate feedback. And obviously you have a lawyer and an engineer in your family. What I'm thinking is: with the art school — because I work across the university — this would be easier to get buy-in for. What would your suggestions be for getting buy-in from, say, the law school, or the engineers?
Okay, well, the first thing to say is that when a student is under any kind of unnecessary anxiety or stress, the brain shuts down — short-term and long-term memory tend to freeze; it's one of the stress reactions. And if a student has lived experience of having been told off as a child, it can be triggering. For instance, I mentioned being told I was a rubbish writer: if I ever get comments about my writing that are perhaps not very kind, I immediately think, "oh, that's because I'm a terrible writer." So that's one thing: being very mindful of intersectional needs — you don't know whether your international student is dyslexic, or whether they have a hidden disability. The other point is about performativity: making sure we're fair in our marking. Unless we've asked a student to present well, we shouldn't mark them on it. For example, in a legal context, do you mark students on how they present and speak, or on their text and their written abilities? Just clarify what you're giving them feedback on. Also, we're looking at trauma-informed pedagogy now, and nonviolent communication. With trauma-informed pedagogy, we've noticed, as you have, that the students coming out of the pandemic don't have the same behaviours and skills that other students had, and that applies to law and engineering as well. So I think being kind and compassionate can help those who've perhaps been through bereavement, or that terrible lockdown they were in, which affected their lives much more than ours. I'll stop there. Does that answer your question, Stuart?

Yeah, that was great. Thanks. I'm going to use some of that. Thank you.

What we can partly do is share some of the materials afterwards — with you particularly, the ones that would be pertinent.

Yeah, that'd be great. Thanks, Kath.
You're welcome.

Okay, so maybe we can bring this part of the session to a close. Thank you very much, Kath — a virtual round of applause. Please remember to post some comments about all of the sessions you've attended towards the end of the day, if you can. We can now invite Joel Mills from BPP. Joel, can you hear me? You have the ability to upload your presentation — do you know how to do that?

I'm trying to share the application or screen, but it doesn't seem to work.

Okay, let me see. Yes, you are on the presenter list. Do you see the button saying "share content"? If you click that, there's either an application share or a file share — and nothing happens, you're saying, when you click it?

I'm clicking on sharing either a window or the entire screen, but I can't see it coming up for everybody.

Okay, that's a bit of a new one for me; usually it works. If you want to email me your presentation, I can do the same as before. You have done — okay, let me just quickly go in. Another opportunity for leg-stretching, if people want it. Yes, I can see your email now, so I'll just download your slides.

I'm almost there.

Your audio is breaking up.

So let me try a proper screen share this time — how do I do that? Entire screen. There we go, I'm up and running.

You're up and running. Okay, great. Over to you. Please tell us a bit about yourself and your work, and use your 30 minutes for the talk and the Q&A.

Thank you very much. Good afternoon, everyone. Thank you for giving me the opportunity to talk to you about our use and testing of the Turnitin AI tool at BPP. I am Associate Professor Joel Mills, head of learning and teaching at BPP University. We're a privately funded university and we have a number of different arms to our business.
We also deliver professional qualifications, and we work for clients delivering specific, bespoke training for them. We have a dedicated university, which is a separate entity in its own right, and this is where our AI tool sits. As you know, we put content through Turnitin to check for similarity and originality, and recently Turnitin introduced a new tool which some universities have opted into — the majority have opted out at the moment. So that's the current landscape. My research at the moment is around the use of AI in education, specifically around assessment, but more widely in terms of its benefits to staff as well as students, and we'll cover some of that content too. I've been experimenting very heavily in the last few months, of course, with the rise and rise of AI — ChatGPT, Bard, Bing; we could name a million and one other AI tools that are out there right now, and every day this landscape seems to change. So, without further ado: this is GPT-4. This is where we sit now with AI. And what is AI good for? Well, we all know what AI can do. AI can write based on the prompts we put in: it can write content for us, develop narratives, solve problems, and write code; it can even generate images, video, and audio. A whole range of things has exploded onto the scene. For example, take this very simple prompt in GPT-4: "Old MacDonald had a farm." ChatGPT will complete that entire children's song for you, with all the verses, just from a few simple words — it knows what to expect to come next once you put that basic prompt in. So why is this a threat, and what's the interest internally around AI? Well, the fear is that students will use AI to cheat. An example: we set an essay question — a very simple one like this, which is what I've used to test the AI with.
And OpenAI will then write that essay for you. It will even try to provide you with references, and it will even try to write them in the Harvard citation format. So the fear here is that students are going to use it to write and submit essays. The concern, as academics, is that we are trying to assess the students' work — to assess the students' thinking — and to confirm that they have a level of comprehension of the tasks set for them, based on the learning they've done so far. So how does this fit in with what's actually happening? Well, students can complete essays with AI, and Turnitin can attempt to interpret those essays using a number of algorithms designed to detect AI. So how will we know whether students are using AI to cheat? Well, as per the Turnitin model, you can take a student submission and put it through a number of different detector tools. Before the Turnitin tool came on the market in April, there were only a handful of tools we could use to detect AI writing. So, taking the output of a simple essay question like this, I took the essay generated by GPT and put it through a couple of different tools. The first was GPTZero. Now, GPTZero is produced by the same people as ChatGPT — OpenAI: not only have they created a large language model which generates content, they've also created a detection tool which reverses the algorithm to detect whether content has been written by AI. It uses a combination of measures and outputs what's called a perplexity score — how complex the language in the inputted text is — and a burstiness score, a kind of "humanness" measure; the higher the scores, the more likely the tool judges the text to have been written by AI.
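To make the two measures just mentioned concrete, here is a minimal, generic sketch of how a perplexity score and a burstiness-style score can be computed from a language model's per-token log-probabilities. This is an illustration of the general concepts only — GPTZero's actual algorithm and scoring scale are proprietary, and the function names here are our own.

```python
import math
from statistics import pstdev

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity of a text given each token's natural-log probability
    under some language model: exp of the negative mean log-probability.
    Predictable text (high token probabilities) gives low perplexity."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def burstiness(sentence_perplexities: list[float]) -> float:
    """One simple proxy for burstiness: how much perplexity varies from
    sentence to sentence (population standard deviation). Human writing
    tends to vary more than machine writing."""
    return pstdev(sentence_perplexities)

# A model that assigns every token probability 0.25 yields perplexity 4.0.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))
```

In practice a detector would obtain the log-probabilities from a real language model and then threshold these scores; the sketch only shows where the numbers come from.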
So the results here were very, very interesting. Taking the essay we see in front of us and putting it through GPTZero, it was detected as entirely AI generated, with those particular scores. However, I fed exactly the same essay question into Bing Chat. Now, Bing Chat is based on GPT-4; you don't have to pay for it, anyone can use it, whereas at the moment the OpenAI model is a paid-for subscription. But taking the resulting output from Bing Chat and putting it through GPTZero, GPTZero thought it was entirely human written: on its perplexity and burstiness measures, it judged the text very, very likely to have been written entirely by a human. Yet it was exactly the same essay question, and the same underlying engine generated the answer.

So that started alarm bells ringing for me: there's something going on here, it's not detecting this, and maybe we should think about how we're going to manage this in terms of assessment. I knew the Turnitin tool was coming at this point, but we hadn't got access to it, so I was waiting with bated breath to see how it would work.

I then took the same outputs and put them through a different GPT detector, smodin.io, and once again I got similar results. So I had a control model with GPT and GPTZero, and putting it through a different detection service gave similar results: a 94.9% likelihood that the GPT essay was AI-written, but Smodin thought the Bing output had a 0% likelihood of being AI content. So this is the scene: we've got students potentially using AI to create essays — writing references, putting structure, content and argument together.
And at this point in time, about March this year, the current GPT detectors were giving me mixed results. On the one hand, they could detect AI if you matched model for model, but if you used a different engine to generate the content, it came out as entirely human. So that was the landscape we started with.

Then enter Turnitin. On April the 4th, Turnitin introduced a new tool to all institutions, put directly into the live Turnitin production environment. We had no option at this point to run it in a beta environment, a testing platform or a test account. So institutions had to either opt in and have the tool, in which case it would be visible to everybody, or opt out, in which case you wouldn't have access to the tool and wouldn't be able to test it. As far as I know from my discussions with Turnitin, this is going to continue until January 2024, when the AI tool will move across from where it currently resides, within the similarity checking service, into the originality tool, which checks for contract cheating and other such devices. So if you want the AI tool from January onwards, you'll have to pay for and buy into the contract-cheating tool, called Originality, and the AI tool will sit there.

So we've got this window between now and January. We made the decision to turn the tool on on the 4th of April, and having access to it has enabled us to run tests on the confidence we have in Turnitin's ability to detect AI writing. What was really interesting was the results when we started feeding some of our essays into the Turnitin AI tool, to compare against our control samples from the other AI detection models. So at this time, we decided to produce this statement, which I'll come back to in a moment.
We drafted as an institution a decision on how we were going to allow students to use AI within their assignments. Our position at BPP currently is that students can use AI to support their learning. They can consult with it, and they can use it to draft frameworks and structures for their assignments, which they would then need to revise substantially and submit as their own work. We said initially that students were not permitted to present the output of generative AI as their own work for summative assessment. So there has to be a rewriting on the student's part, with the understanding that we would then put the work through Turnitin, see whether it could detect the AI-generated content, and identify potential academic malpractice.

The interesting part of the statement is at the bottom: students suspected of using substantive generative AI in their submissions, without proper attribution of the source — which is another moot point — will be referred for academic misconduct. Now, this to me does not fit the JEDI model particularly well, because we are applying punitive measures to something where we have not necessarily yet got the expertise, or even the ability within the tools, to successfully detect AI. My suspicions were already aroused through my previous testing, but the academic quality unit wanted to come down very strongly, saying we don't want students simply submitting AI output and passing their modules with content they haven't actually written. And you can imagine the headlines if the dailies decided to get their teeth into some freedom-of-information requests on the number of students who had AI detected but weren't penalised: they'd have a field day.
You can just see the headline the next day: such-and-such university allows students to cheat in their assignments by using AI to generate all their content. So this was concerning to us, and the idea that we can actually detect AI needed very rigorous testing before we could apply this policy. So this statement was written but hasn't been released yet, because the landscape of AI is moving so fast; things are changing on a daily basis, as we will see.

So, enter Turnitin to the market. We took these essays and put them into Turnitin. Again, the essay questions were very simple; we saw the first one before: you will act as a news reporter; provide a short essay on the politics of the moon landings. I took my essays from the previous samples and put them through Turnitin, and I've got the dates here — it was quite soon after the Turnitin tool was released on April the 4th.

We can see here the first essay, which is a straight copy-and-paste from GPT-4. In the slide, you can actually click on the link and it will take you to the source of that essay, so you can see how the essay was prompted and what it produced. The straight output of GPT-4 was detected by Turnitin as 100% AI. Great, it's working — fantastic, happy with that. It's detected ChatGPT with GPT-4.

Interestingly, I then took the output from Bing AI chat. Unfortunately, at the time you couldn't retrieve your Bing history, so I can't share the history of my Bing chat with you — although that's just been released; I told you it's changing fast: Bing have introduced chat history as of today. But I took the Bing output and put it through Turnitin: 100% AI. Great, this is working; my confidence is rising here. However, we've got to think about the other tools that are available to students as well. So I took the output of GPT-4, the same output as essay one, and put it through a tool called Quillbot.
Now, I don't know if you've heard of Quillbot, but Quillbot allows you to paste content into its engine and it will paraphrase it. You can go on paraphrasing and rewording to your heart's content until you're happy with the output, then download it and use it as your submission. On the free version in the web browser — and I had to do this paragraph by paragraph rather than the whole thing, because of the limits of the free version — the resulting essay came back with a 0% Turnitin AI score. Yet we know that essay was 100% AI generated.

So this was really concerning, and of significant interest to Turnitin, who contacted me after I explained on my monthly call with my Turnitin agents that we had no confidence in the AI tool at this point. I ended up having conversations with Turnitin's engineers, their product team, and the developers of the new paraphrasing-detection tool they're building. They're very interested in these results, so they're listening and working very hard to improve the tool.

I then repeated the same submissions on a different Turnitin assignment: I submitted exactly the same three essays and got exactly the same three results. So at least Turnitin is interpreting them consistently — even if the results are wrong, I am getting consistency — and as a control, that's quite helpful for understanding the position.

I then repeated this with a separate essay: you will act as a business analyst, looking at business strategy. This was my prompt to GPT and to Bing, and again I got very different results. Same structure: GPT-4 at the top with a 67% Turnitin AI score. Bing this time resulted in only an 8% Turnitin AI score. But this time it detected the Quillbot version as 100% AI.
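To make the inconsistency easy to see, the Turnitin AI scores just described can be tabulated in a small sketch. The percentages are the figures reported in this talk; the dictionary structure and the 50% cut-off are purely illustrative.

```python
# Turnitin AI scores (percent flagged as AI) for the same two essay
# questions, each generated three ways — figures as reported in the talk.
results = {
    "moon landings essay": {"GPT-4": 100, "Bing Chat": 100, "Quillbot rewrite": 0},
    "business strategy essay": {"GPT-4": 67, "Bing Chat": 8, "Quillbot rewrite": 100},
}

# Every one of these texts was AI-generated, so any low score is a
# miss — and each essay slips through via a different route:
for essay, scores in results.items():
    missed = [source for source, pct in scores.items() if pct < 50]
    print(f"{essay}: detector missed {missed}")
```

Running this shows that the moon-landings essay evades detection via the Quillbot rewrite, while the business-strategy essay evades it via Bing Chat: the same trick does not fail the same way twice, which is exactly why a pass on one control sample gives no confidence about the next.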
And yet when I put the original output through Quillbot again, this time with a different fluency setting, I got a 48% result. So with a little bit of manipulation, we seem to be able to deceive Turnitin quite considerably. And in this case, sample 2, which was the output straight from Bing, only registered an 8% Turnitin AI match.

This caused me to revise our statement to staff and students. Our current position is a confidence position, communicated to our senior management team and to students, and at the moment it is that we have no confidence in the tool to detect AI. So we are taking this stance: the AI score is not to be taken into consideration in any of our marking until such time as we have further confidence in the tool and a significant reduction in the false positives being generated. It's very interesting from our point of view, because our tests are showing that attempting to detect AI is ultimately going to penalise students, which makes for an unfair assessment of their work.

And in discussion with people in the sector — through Twitter, through my social media networks, through other webinars I've attended, and just following the general AI discussion in higher education — it seems to me that the sector is heading towards embracing AI to support students' learning, and perhaps starting to assess a default, standard essay which students then improve on. For example, we could feed our essay questions into GPT, take the resulting essay, and give that to students as a basis to improve on, critique, or identify flaws in. That might be a way forward to use AI more effectively and embed it into the assessment process.
Or, potentially, we have students prompt these AI models and we start assessing their prompts rather than the output. If we're training students to be the next generation of business people and practitioners, then perhaps we need to train them in prompting, and part of the assessment becomes how well they can use AI to generate something, then fact-check it and take a reality check on the output. What we should be assessing is their skill in using the AI, not so much the content it produces.

So there are lots of unanswered questions here, and it's a very, very fluid landscape at the moment. But at BPP, what we're trying to do is look at all the options rather than take a hard-line, blanket-no approach, and understand better what the tools can do for the student, as well as where we might need to say, actually, this bit's written by AI, and take a cross-match.

One of the things I've suggested to Turnitin is that they provide threshold settings. So maybe in future we'll see a version of Turnitin which allows a certain amount of AI content through the detection tools alongside the originality detection — a bit like how we already have the ability to include or exclude small word matches, or include or exclude the bibliography. My suggestion to Turnitin was that perhaps this becomes a set of threshold sliders an institution can set, to say we will allow 10% of an essay's content to be AI generated, because we recognise that it is a tool being used to help students craft their work. So rather than taking a blanket position that if you've used AI it's a no, we're trying to embrace AI and work with it. But as I say, it's a very, very fluid position and not one that is easy to nail down. So that's my presentation.
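The "sliders" idea can be sketched as a simple policy object. To be clear, this is hypothetical: no such Turnitin setting exists today, and the class and field names below are invented for illustration of the suggestion made to Turnitin.

```python
from dataclasses import dataclass

@dataclass
class AIThresholdPolicy:
    # Hypothetical institution-configurable settings, modelled on the
    # existing similarity options (small-match / bibliography exclusion).
    max_ai_fraction: float = 0.10      # tolerate up to 10% AI-generated content
    exclude_bibliography: bool = True

def flag_for_review(ai_fraction: float, policy: AIThresholdPolicy) -> bool:
    # Flag a submission only when the detected AI share exceeds the
    # institution's tolerance, rather than on any non-zero score.
    return ai_fraction > policy.max_ai_fraction

policy = AIThresholdPolicy(max_ai_fraction=0.10)
print(flag_for_review(0.08, policy))  # False: within the 10% allowance
print(flag_for_review(0.42, policy))  # True: referred for review
```

The design point is that the threshold belongs to the institution, not the vendor: one university might set 0%, another 20%, matching however its assessment policy treats AI-assisted drafting.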
I'm happy to take questions, but I just wanted to show you what we're doing. I'll be very interested to hear from other people who have perhaps got access to Turnitin and are using it already, or who have questions on how it's detecting AI — anything like that. But thank you very much indeed for your time today.

Thank you. Thank you so much, Joel. Yes, any questions for Joel? Fantastic presentation, and a very good comparison. When ChatGPT came along, people were saying either yes or no to it, and now we have this AI tool which is detecting AI. It's interesting to see how you've found that it can be an issue within the JEDI framework, and that's an important message. We're really grateful for your contribution. So, over to people if they have any questions, either in chat or through the mic.

Yes, I think that's a very good point. For assessment to be fair and transparent, it benefits both sides — the institution, the staff and the students — if we understand where we're coming from, with a degree of acceptance that this is out there. We're never going to put the lid back on this box. AI is out there, and we will see more and more different LLMs, large language models, emerging. We've already seen, for example, a law LLM come onto the market; there have been adverts on Facebook where you can consult with it in a legal capacity — a dedicated LLM trained on legal matters. So how much longer before we get, say, an education LLM which will help us with assessment, with drafting, and with training students to write better academically? These are springing up all over the place, and at the moment the whole AI market is completely unregulated — that is the concern. We haven't even touched on data privacy issues, or the ethics around using AI, where our data goes and how it's stored.
But I think it's heading towards regulation of some kind, is my feeling. We're never going to not have AI — that's the thing we have to be aware of. It's not a case of "we don't want to do this"; it's a case of it's here. So how do we work with it, rather than closing our eyes, burying our heads in the sand and pretending it doesn't exist?

Can I just come in there? Please, please. Setting aside this terrible echo on my voice — I hope you're not hearing that — what worries me is the attitude that students will cheat. We're slipping right back to the deficit model, aren't we? We must remember that every student who joins a university course wants to do well and wants to do their own work, and sometimes the reasons they get extra help or use non-standard methods are the pressures put upon them, sometimes by their families or by industry, et cetera. So I think the idea of universities allowing, say, 20% of content to be AI generated is great, but what you talked about, Joel — asking students to generate 200 words on a topic and then critique it — I think would be fantastic.

Yeah, it's an idea that's been mooted around the sector, so I'm not alone in that thinking, but we're yet to test it and put it into practice. Of course, people are generally quite resistant to change, and a bit cautious, maybe even fearful, about going down that route. But it's only going to take a few handfuls of people across the sector to say, these are the results I've got from doing this, and it's working for us, and we'll start to change hearts and minds.

Yeah, in this open area we're going to have to have some trust, aren't we? The early signs of an ethical stance, really: that we will assume you will not use AI. But you see, one of the things is that it's all about the questions we ask the students, isn't it?
Because we're often asking for personal identity to be put into the work, or environmental solutions — and, again, I'm coming from an art and design background, where there are a lot of performance-based courses, et cetera. AI does struggle with those things, and we might be on a winner there. The creative industries might actually be able to help. Okay, I'll stop.

Yeah, it's very interesting on that point, because my first degree was in graphic design and I taught design for a number of years before I went into education management. We're already seeing students — photography students, for example — who could use Midjourney to create photographs: they haven't taken a photograph, they've prompted an AI to generate one. So the skill becomes a bit like the transition from traditional to digital photography and the arrival of Photoshop. We then moved towards assessing photography students' skills in using Photoshop or Adobe Lightroom to improve and enhance images, or to make new images and create digital artwork that couldn't possibly exist in the real world, because it's been mashed together digitally. So we'll perhaps see a transition in the arts world where the skills we assess are in your ability to prompt Midjourney to generate the images you want. Interestingly, we're already seeing marketing and advertising companies using generative AI images for their material, because it's cheaper and they can generate images at scale and on demand. So the commercial outlet for the photographer — another string to their bow — is going to have to be generating AI content.

Yeah, and actually I think the worldwide audience of our student body could be really helpful here, in that some of our populations are further ahead of the curve than the Europeans, perhaps. So there's fashion generation going on — fashion collections.
There's a lot of mimicry and fakery going on, but they could be used in a positive way. So yeah, we could work with you on some of this if you're interested in imagining how this could work and imagining scenarios. Yeah, certainly. Thanks.

I want to add something here, because when you mentioned having students use AI while assessment is going on, that triggered a chain of thought. In my slides, I've talked about using ChatGPT as a learning tool for students. I capture those interactions using my system, so you can see how the students are using it. And if you're assessing those interactions — assessing their skills in creating those prompts — you are looking at a different set of skills which will, by the way, be useful in the future workplace, because AI will always be around us in our workplaces. For example, if I'm doing a systematic review and I put an abstract into the AI tool, it can do a PICO analysis for me very quickly, and very accurately most of the time. So AI will be around for people to use in their workplace — why don't we start assessing the skills needed to use it, as you were suggesting? I think there is something there.

But, keeping to the focus, I'll let you answer any other questions in the chat if they're relevant. I think we've got no more questions in the chat. Okay, there was a comment from Alex; I'm just reading it through. Yeah, okay, super-duper. How are people feeling about continuing? I've got my slides to go through. If people want to sit for another 20 minutes, I can do that; otherwise, I can also do it on another Thursday. I'll take your cue on that, because it's been two hours, so I'm conscious of the time. Thank you very much. I do have to leave — I've got the school run and things to do. But sure, sure.
Thank you. Thank you for your recording. Thank you so much for your lovely presentation — much appreciated. Okay, so if people are leaving, that's fine. Please do complete the survey, and depending on how many are left, we can take a call on whether I should continue with my slides or do it another time. I'm quite happy either way.

Manish, when is the next session where we can pick some of these things up again? Did I see something in June about inclusivity? We meet every last Thursday of the month, 12 to 1, apart from the summer months — sometimes we can't run it because we're all busy — so there will be one this month, and it is around that topic. And when we have more speakers, more people to share their work, we can trigger an extra session on any Thursday. Does that answer your query? Yes, it does. We're over time and I'm afraid I'm going to have to go as well, but I want to see your slides. I'm quite happy to email them to you beforehand. Thank you. It's been a pleasure to join in today. Thank you so much for coming. Yes, thank you. Thanks a lot for sharing your work.