Hello everybody and welcome back for our final session of the day, and what a session it is going to be. We've just spent the last five minutes trying to get everybody lined up and sorted out, but I have to say I am so looking forward to this session, which is going to be all around Gen AI and the student experience. We are going to have some students join us in just a minute, but before they do, let me introduce the three panelists who are going to run the session. That's going to give me a bit of a break; I'm going to sit back and enjoy. Now, these three are, I'm sure, not strangers to the group here at all. First of all we have Sue, hi Sue, Sue Beckingham, who's a principal lecturer and the LTA lead in computing. In addition to teaching at both undergraduate and postgraduate level, she has an academic development leadership role where she provides support and guidance related to learning, teaching and assessment, and in 2017 she was awarded a National Teaching Fellowship. She's also a fellow and executive committee member of the Staff and Educational Development Association (SEDA). Then we have Peter, hi Peter, Peter Hartley, who has had a long career as an academic lecturer and professor of education development. He's now working as an independent consultant on online learning materials while maintaining his profile in educational strategies and the associated evaluations, research and publications. And then we have Louise Drumm. Hi Louise, great to see you. Louise is an associate professor in the Department of Learning and Teaching Enhancement at Edinburgh Napier University. She's a Senior Fellow of the Higher Education Academy and a fellow of SEDA. You're all very welcome, and I am now going to hand over to your extremely capable hands.

Thank you. Thank you very much, Sharon. Can we bring the students to the stage please? That's wonderful. Okay.
So I'm going to start off the proceedings by introducing the students. This is a student panel, as you can see from the programme. Starting with Alex Walker from Sheffield Hallam University, who is undertaking a master's in finance. Then we have Frankie Wardale, a second-year student undertaking a degree in psychology. Then we have Alejandra Rodrigo Sosa, a third-year PhD student in bioinformatics. And we have Amparo Guimine Rio, a final-year student undertaking biomedical science. Unfortunately, our fifth student, Mothio from the University of Kent, is unable to attend, which is a shame. So we're going to start off the proceedings by asking Peter to explain this concept map. Peter is the king of concept maps.

And let me just share my screen. This is a concept map that Sue and I have been using for about six months now. We're both very heavily involved in the Staff and Educational Development Association, and the context for this is that when ChatGPT exploded, we thought that education developers all across the UK needed to know what was happening with ChatGPT and generative AI. So we ran a couple of webinars for them, and we then got various invites to go to universities around the country, and we're still getting invites to go and talk to folk about what's happening with this technology: how is it developing, what is it based on, and of course covering a lot of the things we've already talked about today, like the issues of bias and hallucinations, etc. This is the slide we tend to use as our final point of discussion: where do we go from here? We've used it as a point of discussion to get folk in the audience to say how much their institution is doing to help people prepare for generative AI.
And if you look across that diagram, most organizations we've talked to are now doing something about regulations and procedures, often using things like the Russell Group principles. Most higher education institutions are doing something about professional development, but are they doing it for all staff? Question mark, because all staff will be affected by this development, whether we like it or not. And have the institutions got the appropriate technical development and support? We're not sure that they all have. So institutions do seem to be working on the left-hand side of this diagram, but we think there are also real opportunities for institutions to spread out and get involved in consultations, collaboration and co-creation towards the right-hand side of the diagram. Is there a collaborative sandpit, the idea of a development area where people can come together, academics from different areas, academics and students trying to collaborate? Because none of us really know where this technology is heading, so we're all novices to some degree. And what about student engagement? What are the institutional plans for student engagement? As yet, I don't think we've hit an institution that's got all of these ducks in a row, that's hitting all these buttons. But things are changing: in the six months or so that we've been doing these kinds of webinars, we've noticed increasing numbers of staff getting involved and increasing use of the technology.
But on the other hand, ChatGPT is really the technology most people have used; there's not much experimentation with other solutions and other possibilities, and very little use of images, which we think is a bit sad. Because, as has been said earlier today, stereotypes abound in generative AI, and we see image generation as a mechanism for unveiling some of those stereotypes, for actually getting staff and students to discuss them, very much as the previous speaker, Tarsen, was talking about in terms of social work. So we see things are changing, but are they changing fast enough? I'll hand back to Sue and Louise.

So over to Louise, because what we want to begin to share, through both Louise's work and then our students, is how we have been working in partnership with students. And whilst we've done a small amount, we still need to do more. So over to you, Louise.

Thanks Sue. I think one of the things that struck many of us during the last academic year was the absence of student voice in a lot of the discussions that were happening. Talking back to Helen's fantastic keynote this morning about the decades of research in education and AI, one of the interesting things for me is that the previous research, in my opinion, from what I've found, has been very much about institutions controlling the AI: this personalization idea, the idea that this is something within a system that we are in control of as institutions, and maybe as staff members of those institutions. But what happened was these tools got thrown over the wall into the wild, and students took it upon themselves to use them without our control. I think that's an interesting way to look at what has happened over the past year.
So what we did at Edinburgh Napier University is we set up an anonymous Padlet for students to tell us what they thought. In terms of research methodology, it's not terribly complicated; you can see here what we were asking the students. The primary agenda behind this, for me, was that it was anonymous and absolutely watertight, that students were assured it was anonymous and there was no link back to them, because we didn't know what to expect in terms of people admitting certain things. That QR code there, and I've got a link I can post as well, leads to all the resources about our research project, including a video that explains it a little more, so I won't go into much detail here. We've run it twice now: we did data collection in spring of last year, and we've just finished data collection from our autumn tranche. So I just want to give you a little update on where we were in spring and the more nuanced stuff that's happening now. In the springtime, we had a lot of very positive responses from students, a lot of people saying that they were using it. There are limitations in our data: there will be people who don't use it and therefore didn't contribute, or who did use it in ways that may not have been 100% ethical but weren't prepared to admit that. But on the whole, most people were hugely positive about its effectiveness. They were having conversations with it, using it to help their understanding, using it as a dialogue partner, and, most damning of all for us of course, they were saying they got fast responses in comparison to talking to their tutors or emailing their lecturers. But there was a certain amount of awareness that there were limitations and inaccuracies in it as well. Fast forward now to the past six weeks, and the kind of things coming through are quite interesting.
We've got, and this links to some of the questions that came up in previous sessions, some people saying, actually, I like the process of writing; I don't want my writing to be replaced. Some people saying any use is cheating. So we're getting really diverse opinions now. And then other people think there's absolutely no question that it should be used and should be part of education. Consistent from last time round to this time round: it's helpful for disabled students and for neurodivergent students, with massive enthusiasm for it as an assistive technology, and also the idea that it expedites certain tasks like synthesizing, summarizing and searching. There were some quite plaintive posts saying it's not fair, I've done lots of work. And interestingly, some people were concerned about outsourcing skills, maybe not talking about themselves but in the abstract: are we outsourcing critical thinking, and is this helpful or is this limiting? So that's where we're at at the moment. I'd encourage people to follow that link that I posted, and if you have any questions, please get in contact. Our data set is open and our methodology is open if people want to replicate this elsewhere. But that's just a snapshot of where we are at the moment. This research was co-conducted with students as well, and we are doing that this time too. That's another thing I would encourage: getting that student voice into the actual research itself is really helpful. And that's all from me. I'll pass back to Sue.

Wonderful. Thank you very much. We're actually taking this up at Sheffield Hallam; Louise has generously offered to share all of this, and we're going to be undertaking it in semester two, working with students.
So now to the exciting bit: we're going to ask our student panel, one by one, to comment on their experience with generative AI, and to maybe say a little about their involvement with their own university in terms of any partnership activities they've been involved in. We're going to start, please, with Amparo.

Hi, can you hear me? Yes. Yes. Hi, I'm Amparo and I'm aiming to specialize in data science applied to neuroscience. I started using AI tools at the same time I started intensively studying data science on my own, which was around two months ago, so not so long. I have been using different types of AI tools and playing around, exploring what they could and couldn't do for you. In fact, I'm using them in my honours project, and I will be adding an attached file with the list of the AI tools I have been using and how, to show that these tools do not replace my responsibilities, but rather enhance my productivity and efficiency. But focusing on generative AI, I use it for a wide variety of tasks. Since English is not my first language, I use it to help me with any queries regarding grammar or phrasing, and in turn this also helps me continue improving my English. As a neurodivergent student, reading papers is quite a difficult task for me, since I struggle with my concentration and I misread words or don't process what I'm reading properly. Sometimes this is just awful and unbearable. But I realized that summarizing part of the paper first with generative AI and then reading it makes these tasks much easier for me; I guess that's because what I'm reading is no longer new information. These are only two examples, but of course I use it in many other ways. However, and I can't stress this enough, you need to have a good understanding of the topic you are using AI tools for, because generative AI can't and won't do the job for you.
And that's more or less my experience; later on I will be happy to reply to any questions you may have.

Thank you Amparo, that's wonderful. Could we have Alex Walker please next?

Hi everyone, my name is Alex, and I've been using ChatGPT pretty much since its inception; I discovered it probably a week after it was launched by OpenAI, and since then I'd say my experience has been quite thorough. Over that time it has basically replaced QuillBot for me: I use it as a more advanced paraphraser, in my opinion, whereby I can feed it inputs in a certain style of voice and get the desired output. Then more recently, in March of this year when GPT-4 was released, it really helped me massively in terms of being able to program. I started learning how to program last year and found that my progress was quite slow; it took me a long time getting familiar with different inputs in the languages R and Python. Using GPT-4 has been a bit of a game changer: now if my code's gone wrong, I can input it into ChatGPT, find out where I've gone wrong, make notes, and it essentially eliminates the time I was previously spending on quite menial tasks, and it means my coding abilities become more proficient. But over the last year there have been two main concerns that have emerged for me. The main one is that within my faculty at university, the information from lecturers and their opinions on ChatGPT are completely different. I've had some, particularly within data science and analytics, who believe these tools are brilliant, much as when Google was first launched in the early 2000s: these tools are here and they're here to help you, so you can use them. And I've had others who seem to think these things are almost the creation of the devil and plead with us not to use them, and if you do, you will be caught and there will be reprimands.
So it's really overwhelming, and you feel that it's a moral question: are you doing something that's wrong? The other concern is that students aren't actually being taught how to use these tools. Twelve months ago when it was first launched, I lived in quite a large household at university, and each member of my household would find out what ChatGPT was and what its capabilities were. I recall one friend who showed me that it had written an entire essay for him. He said it's great, it can reference; we sat down and looked through the references, and all 25 of his references didn't exist. So the problem is students aren't being taught to use these, and I think it's really important over the coming years that there is some sort of formal training. What shape or form that takes, I'm not too sure, but I definitely think it's really important right now. So I'll pass back to Sue and Peter.

Thank you Alex. Can we have Alejandra please next?

Okay, can you hear me fine? Yes. Yes, okay. So my name is Alejandra, and as a PhD student in bioinformatics I think I'm quite up to date with the latest advances in AI, especially, but not limited to, the use of generative AI for both writing code and editing my texts. In my personal experience as a computational researcher, I have explored multiple tools quite thoroughly, ChatGPT being the first one I ever tested. My hope when I started using it was to speed up my code writing, and more or less in line with what Alex has mentioned, I've found that my code writing has been quite substantially sped up by the use of ChatGPT. After that first use, I started exploring other more specific AIs, such as tools that explain what a piece of code does, to perform some sort of reverse engineering and help me understand and clarify specific pieces of code that maybe I wasn't too sure about. I have also explored other generative AIs, such as Perplexity AI.
I found that they all have their advantages and their drawbacks, but that's something we can clarify later in the questions. The thing is, after using these AIs I have arrived at a very straightforward conclusion: I believe that AIs in general are a very useful tool as long as you follow certain guidelines. Someone has probably put this better than I have, but I believe I can summarize it in three points. The first, in my opinion, is that as long as you can accurately word your intentions, as long as you are specific and clear in what you're saying, then you can use the tool to your advantage in a quite efficient way. The second point is that you should find the appropriate tool for your purpose. I know we're mostly discussing ChatGPT in this webinar, but there's life beyond ChatGPT; there are many more AI tools, so I believe there's probably a right tool for the right purpose. And the third point is that you have to have the right expectations about what's going to be generated. Some of my colleagues have already mentioned that you cannot expect AI to do your whole job for you. Maybe that's true to different degrees: in the first year of undergrad it might be a bit more possible than in a fourth year. But definitely, speaking from a third-year PhD level, I can say that AIs cannot do my job for me, simple as that. No matter how hard I tried, whenever I got too specific in my coding questions, the AI grew increasingly confused, to the point of repeating the exact same hallucination over and over. However, it did speed up my code writing when I asked it to generate simple templates.
And in a similar fashion to my colleague Amparo, I'm also not a native English speaker, and I found it quite useful for sometimes rephrasing my sentences in a way that's more suitable for English, instead of following Spanish grammar that doesn't make much sense in English. The last thing I'll say is that even though I'm an alumna of Edinburgh Napier University, where I did my master's, I am currently doing my PhD at the Royal College of Surgeons in Ireland, Dublin, and I have found that my university's policy regarding AI is quite confusing. On the one hand, they have a webpage that mentions certain tools and certain guidelines on how to use them and how not to use them. But on the other hand, if you're trying to use AI for your PhD thesis, they blatantly say out loud: no, you can't use it, that's it. Which is a bit confusing, because you're providing guidelines in one place and then saying an entirely different thing in a different section. So it would be great if we could achieve some sort of standardisation across all higher education institutions. That's a big ask, but it would be great. And that's all I have to say for now.

Thank you very much, that was great. And now over to you, Frankie, please. Are you muted, Frankie? We're still not getting you, Frankie. Anything else? There we go, yeah.

So I'm Frankie. I am an undergraduate psychology student at Sheffield Hallam, and I'm also part of the university student research team. I'm currently leading a project on student opinions on the use of Elicit, which, if you're not familiar, is a personal kind of research assistant: it will help summarise and find articles for you, and it can summarise the abstracts of the top four papers it finds.
It's in the development phase, and we're looking at getting a small group of students to use it in an assignment, with proper guidance around it so that they're not punished for their involvement in the study. Then we're going to get them to use it and get their feedback. We're really interested in how an AI tool might affect a student's key academic skills, especially with future development in mind. Undergraduates don't have the same level of skills as a PhD student or a master's student in terms of a literature review and a literature search, so we're concerned: do people rely on these tools too much if they haven't already developed the skills? Because if you already have the skills, you can use it alongside them, and it's a nice interaction like that. But if you're starting out trying to learn something and you find an easier, less labour-intensive, less time-intensive way to do it, then you might just rely on that and not develop your own skills. So we're really interested in looking at that. I think the others made good points as well. A lot of people feel like it's cheating; a lot of the people I talk to don't know how they should use it. There's a lot of guidance from Sheffield Hallam, but it doesn't feel right to them. Especially with ChatGPT, most people think it's basically a kind of plagiarism, because it's not your words, it's kind of the prompts, and there's a big grey area there in how people feel about it. So yeah, there's definitely a lot to unpack: does it affect students' abilities? Do people think it's cheating? Do people's opinions on it actually change how they use it, and would that disadvantage them? So I'll pass back to Sue.

That's wonderful. Thank you very much. And Alex is also involved in a project, aren't you, working with the Learning Centre?
Do you want to talk a little bit about that?

Yeah, so essentially we did a research project over summer to look into how undergraduate students can use ChatGPT to structure an assignment. Again, it was looking at their iterative process from start to finish, and my main takeaway was the issue of students not being trained, and the problems with hallucinations. A student had looked for literature they could use within their assignment and found ten papers they could reference; again, you search for the papers and the papers don't exist, they're not there. So I think that's something that definitely needs to be addressed to some extent.

And that's actually forming the research chapter in a book that Peter and I are co-editing with two other colleagues about the use of generative AI in higher education. That's a book in the SEDA series that will be coming out early next year, possibly spring. So over to you, the audience, for some questions. I noticed there's one already from Lucy Caton. She's the new lead for the Centre of AI in Education at the University of Bolton, so you've got a big job there, Lucy, but you're asking for recommendations on how to effectively manage lots of different collaborations and research projects with students and across institutions. It's a big, big question. In terms of Sheffield Hallam, we've got the Student Engagement, Evaluation and Research Group, and each year we actually employ students as researchers looking at lots of different things in relation to learning, teaching and assessment, and some of these have been linked to artificial intelligence, or generative AI in particular. So having that pool of students is one way. Another way is the funding each year.
We have some small pots of funding to give academics the opportunity to undertake some sort of research, and we encourage them to do that with students, so they can apply for that. So that's a couple of examples. Do you have anything you'd like to add, Louise or Peter?

The things I'd add to that are working with your student unions; we do have a number of student consultants who are employed within the institution, and students as partners as well, so any of the existing student partnership schemes you might have already. The other thing I would add is to just advertise what you're doing and share it openly, and you never know what sort of coincidences and serendipity will bring us together. Amparo and Alejandra are two examples of that: two students who approached me because they saw a poster about our research, and they've got a number of initiatives underway already just because of that. So the more you share, the better. It's not that you need to organize and collaborate across the board with everybody, but if we set up networks where we're in contact with each other, all members of various networks, keeping in contact and sharing what's happening, that helps. I think collaboration and cooperation are key to this, but we can't organize it from above; it has to be quite grassroots.

I think institutions that have already gone quite heavily into students as change agents and co-creation are probably at an advantage here, if they make use of it. But again, there's that notion of how these initiatives get permeated across the institution, because I know institutions, which I will not name, where a really solid initiative never crosses the boundary between departments. And that whole question of consistency between staff and departments, which has come out of the presentations so far, is I think really important.
I was talking to a student the other day whose tutor was telling the students not to use ChatGPT, having just told them how they used it themselves, so there are elements of hypocrisy there, maybe. We've got a question that's just appeared in the chat from Rosa Sadler: how aware do you think students are of the bias issues in AI tools? You folk obviously are; what about your peers?

Can I say something about this? Yes. About what Rosa said: from what I can hear from my peers, I think they are not very aware of the bias issues, and they are also not very aware of the characteristics specific to each AI tool. For example, I'm sure they don't know much about ChatGPT 3.5 not having access to the internet, or its knowledge only going up to 2021, because I have heard them wanting to use it for their assessments. I really think this is something that could also be added to a guide for students on how to use AI tools: what each AI tool can do for you, the characteristics of each of them, as well as the possible bias you might find.

Have any of your institutions really gone that far, to create a general guide? Or do you know of any that really have cracked that?

Actually, my colleague Alejandra and myself are working with Louise and the Department of Learning and Teaching Enhancement, I think it was, to develop a guide for students and even teachers on using AI tools. So watch this space, Louise?

Yeah, we're just trying to take our experiences as students and shape them in a way that might be useful for everyone at the university.

Then Sarah Purcell in the chat made a brilliant comment about needing to know your topic to make effective use of AI, and certainly that has been my experience of getting to grips with this technology: the more you know about the topic, the better use you can make of GPT and all the search terms and so on. Louise, do you want to comment on that?

Yes, because I think one of the things that I've encountered when I've
had staff who've recently been introduced to or experimented with generative AI tools is that, because they're experts, sometimes they go in, put in a couple of prompts, and what they get back is rubbish, so they walk away going, oh, that's fine, that's nothing to worry about, it's no use to me, so it's probably no use to my students either. But they've got the expertise to see through the problems with it, which students or less expert users would not have.

A really long comment in the chat from Mary Jacob about an opportunity to redesign learning. Could we open Mary's mic and let her tell it herself? Is that possible? Doesn't look like it; just give us a second, it will take me a minute to give her the permission, so just give us one sec. Okay, I'll start reading her comment from the chat: an opportunity to redesign learning to focus on task-based learning outcomes rather than content-based ones, so that a student should be able to carry out a task; it shifts focus away from product towards process. This can help avoid the risks of using AI as a proxy for learning. What do the students on the panel think? Could this change the teaching and the learning and the assessment? Mary, do you want to add to that?

Yes, thank you very much, Peter. I think this is possibly a really good opportunity for us to do that redesign, especially if you can work with students as partners to redesign not just the assessment but the whole learning design of a module. So instead of turning to these tools to produce a thing that you then submit, where in some cases students didn't really do much of the actual work, a learning outcome is about what you can do in a meaningful way: I'm able to write a computer program of a certain type, or something like that. That might involve using AI as a tool, but I have the knowledge to do it; I'm capable of doing this task. And that might help to discourage people from using it in a way that might
bring academic integrity risks, but actually help them to develop for the workplace. So we're just wondering what the students on the panel think. Frankie, do you want to kick off, and then Alejandra?

Yeah, I think that's a really good idea. I've always been massively in favour of changing how you educate and how you test: teaching skills, teaching people how to do something instead of what it should look like when it's done, because you can then apply that and change it around in your own life. I was diagnosed with ADHD quite late, during my A levels, which meant that all through school the system just didn't work for people like me, and for a lot of people who aren't neurodivergent it doesn't quite work either. It's changed at uni now, definitely, but GCSEs and A levels in this country are all to do with memorisation, unless you do a coursework-based course, and those are really rare nowadays. So it doesn't really prepare you for what you need in life: you don't get those critical thinking skills, you don't get anything like that, and you get those risks where you're looking to output something, it gives you the wrong thing, and you don't know why and you don't challenge it. So yeah, I'm definitely all for more skills-based learning instead of outcome-based learning.

Hello Alejandra. Hi. So yes, of course I also agree with what Frankie said; I'm also really all for skills-based learning instead of just outcomes, and I think this is a concept that has already been implemented before, for example open-book exams, or back when we started using the internet in universities. I still remember a time when using the internet as a resource to do your coursework was something that was still being debated. So I do believe that this is kind of the natural flow of
AI tools, because, as has been said previously, they are here to stay. I don't see a way we can just ban them completely from universities; I don't see how we can remove them. So I think the best we can do is try to find ways to use them to our own advantage, and also teach students to use them appropriately. That's my take on that.

Anybody else want to chip in on that?

Yeah, I would like to.

I'm proud of you now, Alex.

Yes, besides what my colleagues have said, I would also like to say that I think it would be really nice if assessments could focus more on giving you the tools and the practice to check what you need to know, or to understand and learn new things, because I think that's very important. In most fields, if not all of them, what you know because you studied for an exam at university will not be sufficient; you will need to be open to learning new skills and preparing for them. So I think it's very important that we learn at university how to do that, and I think AI tools can be taught as a good way to support that new learning.

Alex, any comment?
Yeah, in support of new learning, I'm completely in favour. In terms of programming, say, it would have taken me years to get to where I am in terms of my proficiency; I simply couldn't have done it otherwise. I haven't got the access; I haven't got a close friend who can program to the level I now need to. So you really have got that on-hand personal assistant that's there 24/7. So yeah, I'm absolutely all for it.

Yeah. And I think if you look at the literature coming out of the organisational surveys, so many organisations, they may not be fully fledged in going over to AI, but they're certainly dabbling with it. So the more you know before you get out onto the employment market about what this technology can do, and more importantly what it cannot do, what its limitations are and how you can use it effectively, the better.

I'm not sure what time we have left, Sue? Sorry about that, I just had a coughing fit. We finish at ten to two. Okay, so we've got another five minutes.

Just picking up: questions of bias have come up again in the chat. One comment there, from Jason Williams: "I enjoyed debating topics with Google Bard." I wonder whether any of you have used that kind of to-ing and fro-ing, using Bard or one of the GPTs as a kind of debating partner? Anybody?

Yes. Funnily enough, I use an AI tool, mostly for fun, called Character AI. You can create your own character, and I created a fictional classmate that knows more than I do about data science. So I usually debate with it the ideas I have about my own project, and it keeps suggesting things I could do, or different alternatives for how I could test something. Most of it is just garbage, but some of it is actually useful: it once suggested a call that I'm actually going to use for my own project. And again, you really need to know what you're talking about, because you need to read that call, you need to think, okay,
that makes sense, I need to check it. You need to check it on the appropriate website, you need to understand the call and the arguments, whether you have all the data you need and whether you can make it work; then you can test it in RStudio, and if it works, fine. But still, it's a very fun and also very useful way to have this kind of peer, partner, colleague that happens to be an AI, and you can just discuss topics with it. Because even if it's suggesting things that don't really make much sense, it can still make you think of another way you hadn't thought of before: you say, okay, that doesn't make sense, but actually this other thing I could think about, thanks to that suggestion, is useful. I think it's also a very interesting way to do that learning and support.

I've also heard, or read, that it can be useful for interview practice: you can put in the job description and the competencies and skills that are required or desired, and ask it to prompt you with questions that might come up at interview. Having that practice, given scenarios for various different skills, could be helpful. And Louise, you've just put a comment in the chat giving us an academic reference for it. Do you want to talk about that for a minute? The zone of proximal development.

Yeah, it's just that idea of the zone of proximal development, the idea of a more knowledgeable other being the way of bringing your students' knowledge up to a particular level. It's quite an old theory, but it's a goody; it gets rolled out, but it's embodied exactly in that kind of relationship with the generative AI you're using there. So it's really good.

I wanted to go back; there was a question in the chat from Kathy, Peter. It was about what our colleagues here on the panel think about the guides and training that students might need. What might that look like? That would be interesting to know. Okay, so what sort of training should you have? Yeah, Frankie?
I think it'd be good to have any kind of training, really, because the guidance that's there at the moment, and this is at Sheffield Hallam, is very accessible, but you need to go and find it; you need to be the one who takes that initiative. We have adapt sessions, which are academic development personal tuition, and there's been nothing mentioned in those about it, yet that seems like a perfect place to talk about AI, talk about its usage, and remind students of the guidelines the uni has and how to follow them properly. So I think just integrating it, not even as a separate session, just putting it into the things that are already there, the familiar settings that students have, is probably a good way to do it, so it's not something you have to go and find; it's shown to students.

Yeah, and I think the previous session from Tarsen was a perfect example of how you could bring generative AI into a class session, so that you got the general lessons out and it then moved into the subject specialism, and it covered university guidelines. A session like that on every course in the country would be a real step forward, I would say. Any other thoughts?

I'm thinking about open badges, and, you know, doing something that's extra or co-curricular, I guess, really, rather than extra-curricular. But yeah, trying to design it into a module for every single course is possibly the ideal way, though it comes with its challenges. It also links to what Catalina mentioned: staff are already really, really busy, overworked, etc. When are they going to have time to take up the training themselves, or to produce the training? It comes back to that allocation of time for staff development, which everybody in any role needs to embrace and undertake. But how can we best tackle this? I've often said that it should actually be timetabled into our working weeks, not necessarily every week,
but so that there's an opportunity there and everybody has the chance to take part. Because quite often, you know, that again is self-directed, and I've quite often seen things and thought, that looks really interesting, but I'm already busy doing other things, because that time isn't spare at a given point.

So a couple of other issues have just come up. One: are you, as students, worried about cheating with AI? From Louise's work it doesn't seem that's an issue there, but how important is it? And the second, about environmental impact: whether that's come up as an issue students are worried about. Pick up either of those, if you would.

In terms of the cheating dilemma, for a lot of the people I've spoken to it seems like a moral question; a lot of people feel it's almost a mark of superiority now not to use these things and to be able to think, I guess they'd say, more critically than others. And as for what determines ethical use, again, I'm not particularly sure. If there could be some sort of general consensus on that, that would be fantastic, but that's almost an impossible question, isn't it?

So I think we'll have to end there, and I'll just reiterate what Louise and others have said: continue openly sharing the things you're finding out, because we're all learning. Students are learning, staff are learning, but together, if we share this information, we can learn from what's not going well but also what is going well, and take that forward. So I'd like to thank everybody on the panel, including our wonderful four students, who have been incredible. Thank you very much for participating, and thank you to Peter and Louise as well. And over back to you, Natalie.

Thank you. Thank you so much, that was a fantastic session, and I think it maybe highlights, doesn't it, that students are looking for guidance on how to use AI critically and ethically. But in some ways, I think what
we've heard from today's students is that they're actually streets ahead of many of the lecturers teaching them, and that is one of the challenges we face. I think today has been a fascinating series of presentations, and I was struck by an image in Mary's session, that tangled ball of string. We've been looking today at the ethical aspects of using artificial intelligence in teaching and learning, and we've taken different routes through that ball of string, different lenses on all of this. The thing I'm left with is: can we find, can we develop, sustainable approaches that bring staff and students together in partnership to work our way through and disentangle that ball of wool, as it were? Can we develop that kind of partnership working to develop guidance, to explore interdisciplinary learning, to look at some of those ethical conundrums that AI throws up, a bit like Tarsen was explaining in his case study? How can we develop scaffolding locally that's contextual to the different disciplines, because different disciplines bring different challenges? And then, how do we make sure that we support ethical and inclusive practice? So today, to me, has been like the start of a conversation that we need to continue. There's almost that sense that we should come back next week and carry on the conversation; maybe we need to look at how we facilitate that. But those are just my initial thoughts. Sharon, I don't know what your thoughts are.

Thanks, Natalie. I was trying to jot down a few notes from the absolutely fantastic sessions we've had today, and quite apart from the initial notion of principled refusal, I suppose one thing that has really made sense to me today is around hope for the future. I think that really shone through today. And then, as you said, the need to have these spaces for
conversations; it came up in a number of different guises throughout the day. I think Helen was talking about conversations, we were talking about debate, we were talking about reflection, and then towards the end Louise put it very beautifully, talking about networks and collaborations. Something she said really meant something to me: that it has to come from the grassroots and not from the top down. I think that was just lovely, and those are the key messages I'm going to take home with me. But I agree, we need to have more of these conversations. Not next week, though; I've got too much on.

Absolutely, it's Christmas, isn't it? And just thank you, everyone, for coming along today, and thank you very much to all of our speakers. I think the quality of the presentations today was just absolutely superb, with lots to take away and reflect upon, and also lots of practical things we can take and apply to our own practice, which is always a real bonus, isn't it: to come to something like today, put time aside, and then take things away and actually start to apply them in your own context. So thank you very much. I also really want to thank Kerry and Katie and the whole ALT team, who've been working tirelessly in the background trying to keep things going. I know it's maybe been a little bit of a difficult day with the technology, but thank you, Kerry and Katie, for supporting us all and helping to make today happen. I have to say that I've been so busy in the run-up to today that I really just rocked up, I'll be completely honest, and I'm sure you could tell. But yeah, Kerry and Katie and all the team have been working so hard behind the scenes.

Thank you both, and thank you so much for co-chairing. Just to say that we'll be sending out the recordings from today; you'll already find a lot of them on the platform. On the main agenda page, if you scroll to the bottom to the section called Media, you'll see that I think everything up to
Tarsen's session is in there now, and the rest will be made available shortly, after maybe a bit of a delay for the closed captions to be generated, but they will appear eventually. So do take a look at those. And any of the speakers who have shared resources or their presentations with us, we'll send those on to everybody who attended over the next couple of days, so you'll get everything from today shortly, later this week.

So, from us and the ALT team, a huge thank you to everyone for coming; to all of our fantastic speakers, who, as Natalie and Sharon said, have done such a fantastic job today; and especially to our students this afternoon, who did a fantastic job not only dealing with their own technical issues getting into the session but also answering so many questions, and dealing with it all so well. Thank you all so very much. That's it from us and from the ALT team, and hopefully we'll see you at another ALT event very soon. Take care, everyone. I shall disappear and allow Natalie and Sharon to finish the day.

Sharon's disappeared, so just to say thank you very much again, everyone, and keep an eye out for further events from ALT in the new year. Just a reminder, too, to think about potential case studies that you might be able to submit to ALT around applying the ALT ethical framework. And again, there is the ALT award at the ALT-C annual conference for the best case study around that, so get your thinking hats on and think about what you're doing locally that you could maybe submit, or just share those case studies. I think that's what we really relish, isn't it: the ability to see what our colleagues are doing in practice, what we can learn from that, and how we can share our own context. So thank you again, and with that I'll pass to Sharon. So I'm just going to say goodbye. Happy Christmas. Bye.