This is the hardest part, waiting until the last minute. Yeah, I know. So welcome back, everyone. I hope everyone's had a chance to have a little bit of a break and grab yourself a coffee or tea. We're going to continue now, and I'm really delighted to be welcoming Mary Jacob from Aberystwyth University. Mary holds CMALT, she's a Senior Fellow of the HEA, the Higher Education Academy, and she also has a PGCTHE. She's a lecturer at Aberystwyth, where she coordinates the Generative AI Working Group. And I have to say a big thank you to you, Mary, because I know that you were very early doors in getting your guidance out on using AI, and I know that certainly we at Dundee very much made use of that. So thank you for that. Mary's primary responsibility at Aberystwyth is actually leading the Postgraduate Certificate in Teaching in Higher Education. And she's probably, I think, famously known for her weekly curated list of all the events that are going on in the ed tech world, which I think is an amazing feat, and I know that we share that with colleagues locally. So I'd really recommend following Mary on Twitter, or X as it's now known, and also on Bluesky, and keeping up to date with that fantastic list of all the events that are going on. Anyway, I'm going to be quiet now and just say: delighted, Mary, that you're going to be talking to us today about how we can develop resilience in this ever-changing AI landscape, which is changing from day to day almost, isn't it? So a very warm welcome, and thank you for coming along today. Thank you so much. It's such an honor to be here speaking at the ALT Winter Summit and following Helen Beetham. I just want to say that I agree with everything she said. I don't think I'll be exactly repeating it, but I'm very much on board with Helen's message and the message of this whole conference. So today I'm talking about developing resilience in that ever-changing AI landscape.
And I created this image for you using an AI image generator called NightCafe. You can see I've put the credit here at the bottom of the screen. That's an example of something we might do as educators with our own students: to say, this is how I made it. I didn't hand-draw this, but I did use critical thinking, evaluated the different outputs, tweaked the prompt, and went through a number of iterations. I personally feel like I might be on one of those floating islands there, and we're sort of moving about, the horizon is misty. We've got this technical thing that's sort of dark, but maybe some possibilities here as well. And I think that represents the way a lot of us are feeling. So now I'd like to get a feeling from you about where we all are now, and I'm going to switch to a Vevox poll here. There should be a link in the chat if you prefer to click a link, or you can scan the QR code, and just say what your concerns are at the moment. You can put just a word or a short phrase. Yeah: academic integrity, bias, regulation, lack of guidance, privacy, overhype (I like that, from Helen's talk), AI-proof evaluation, equity, class barriers, lack of direction, pace of change, missed opportunity. Yeah, overwhelmed; that comes up every time I ask this question. Loss of trust, commercialization, fear, digital divide. Right, we've got a few more people, so I'll give you a moment to put your answers up there as this is changing. Loss of skills, that's an important one. Interesting to see equity as the dominant one so far, fantastic. So I'm going to close this one and go to the next question, which is the opposite: what are your hopes for AI? Those were the fears and concerns; what are the positive things you hope might come out of this? None, efficiency, consistency, supportive, time, learning opportunity, more creativity, it is possible.
"Literary frameworks", that's really "literacy frameworks", sorry; inspiration, democratic knowledge, openness, improved workflows, yeah. Really interesting that some of the things we're concerned about on one side can also be an opportunity on the other side. Productivity, efficiency; I like the creativity and the inspiration coming through as themes here. We'll give the last couple of people a chance to post their word and then we'll go back to the PowerPoint. Yeah, time is the top one, with efficiency and creativity at the highest level of what we're hoping for. I think there is hope, and I was really happy to see Helen close her presentation on that word, hope. So thank you very, very much for your responses. I think that's a pretty typical mix of feelings, and it gives us that unsettled feeling, but there are ways we can cope with that. The way I see it, from where I sit, our challenges include navigating between extreme apocalyptic panic on the one hand and uncritical enthusiasm on the other, and that commercialization; sometimes when you spot that uncritical enthusiasm, it's because somebody representing a company is selling a product. So we will take that with a grain of salt. We need to manage the risks, take advantage of the benefits, and develop AI literacy. This has already been mentioned. I'm coming from the position of an educator; I have many years as a learning technologist, but I would position myself now as a lecturer. And AI literacy for educators and for students, and supporting students in learning both effectively and ethically, is really important. So what is resilience, and who needs it? I like these pictures because the woman is strong, and the things, I don't know what they really are, they could be Pringles, but maybe not, they're flexible. So: strong and flexible.
This is the American Psychological Association's definition of resilience, which emphasizes the process of adapting to difficult or challenging experiences, being flexible, and making adjustments to internal and external demands. And I think that's what we're all facing. At the higher level, institutions need to be resilient in this way, students need to be resilient in this way, and staff in our various roles need to be resilient in this way. So it is a challenge. In the age of AI, I think this resilience requires being like a bendy straw, but maybe one that's biodegradable and not made of plastic. It requires being flexible and adaptable, because in this current age things are changing so rapidly. We almost saw OpenAI completely collapse, but then it didn't; five days later they're back, and they've got rid of the board that had the ethical concerns. So we need to be able to respond in a very agile way to emerging situations. I do think that critical thinking is absolutely essential for us, for our students, and for our institutions. And then, as individual human beings in the world, we need emotional regulation, and to support ourselves and others. This is also a lot of what Helen was talking about in her first talk, about dialogue and community and support. So I think our situation now feels to me sort of like this tangly knot of wires. How do we untangle this? What can we do? I think the first thing is to recognize the limits of what we're facing. There are no easy answers. I talk to staff almost every day who want an answer, a simple yes or no kind of thing. That doesn't actually exist; the situation's much more nuanced than that. We have big changes on short notice.
I wrote this before the whole thing that happened a couple of weeks ago, which was described in a New Yorker article using a rude term I might not say out loud: the first word of it is "cluster" and the second word starts with F. Anyway, Sam Altman was fired, then hired by Microsoft, then brought back into OpenAI, and they didn't lose all their staff in the end, all within about five days' time. So we don't really know. Something could happen tomorrow; we can't predict. We need to be ready for that. AI is being embedded everywhere. It's already embedded in a lot of things that we use every day, and as a result, we as individuals have little control, unless you're going to completely boycott these massive global corporations whose products we use in our daily work, and I don't know that that's feasible. So we have to find a way to cope within the givens and the constraints that we have. There are high stakes. As an educator, one of the highest stakes I think about is student learning, but equity and environmental impacts are high stakes as well. And as a result, there are high emotions on all sides. We need to acknowledge that, deal with that, and come up with our ways of coping through resilience. So I'm going to give you a little case study based on Aberystwyth University. I'm not saying that we are the best, or that we did the best that could be done; a lot of other institutions are doing a great job, and I'll direct you to some resources from other institutions at the end. But here's how we approached it. The first thing was to try to navigate that middle ground between the extremes of panic and uncritical enthusiasm. When talking with other staff members, senior management, and students, trying to keep teaching staff in that middle ground immediately allows the tension to calm down a little bit.
We work very hard to coordinate efforts across the university, and I'll show you our Generative AI Working Group and how we do that in a moment. So it's not just one small team; it's a coordinated effort from people in different positions. I think it's absolutely crucial that we listen to students as well as listening to staff. Again, this was something Helen was talking about very much. So it's not coming in and doing a top-down "everybody's got to do it this way". It was very important for us to identify priorities. When the Generative AI Working Group first met, we said, okay, we'd like to do all these different things; what's the most important thing to do first? Then we focused on that, and we were able to work very rapidly. I proposed the group in January, it was formed at the beginning of February, and we rolled out our first guidance to staff at the beginning of March. I'm very proud of my colleagues. We knew we couldn't solve all of the different things, but we said: what's the most important thing at the start of term? What is the best advice we can give to staff? And it was a shared project; it wasn't just one person saying this. The other thing that can really help us to be resilient at the institutional level is to use, adapt, and share materials. I learned a lot from people in the sector, and I'm very, very grateful. Use, adapt, and then share back. That's the ethos that, again, Helen was talking about: community and collaboration. You don't have to reinvent the wheel for everything. If somebody else has something that's great and they're sharing it, you can use it or adapt it, and put your own material in the public domain. And then, once you have this guidance, I think it was really important for us to communicate through multiple channels. It was still a challenge for us to get the message across to all staff.
Some staff weren't ready to hear the message in March, but eventually we got through, and I'll show you some of the things we did in more concrete detail in a slide or two. So at the institutional level, we created the Generative AI Working Group, which looks like this, although I'm not really in the middle; I'm just the facilitator, the coordinator. We report to our Academic Enhancement Committee, which is a university committee. The working group members include our academic integrity officer and representative teaching members from each of the three faculties we have. We have the student union education officer; oh my goodness, am I grateful. We had last year's officer and the new one, and they're both so enthusiastic and add so much value to the discussion. It's absolutely crucial. We have student support staff in the working group, library and academic development teams, and registry and unacceptable academic practice staff. And I just coordinate the communication, basically. So we draw upon expertise from the sector, and that's the QAA, Jisc, and other organizations, and we create guidance. We suggested policy wording, such as a change to our unacceptable academic practice regulation, and we train staff and students. That's what our working group does. So here's what we've done so far, and again, I'm very proud of my colleagues and very grateful. As for materials that we created with the academic engagement team: librarians (big props to librarians here) and information skills staff had input into the guidance for staff, created some of the video guidance for students, and created a LibGuide, a library guide, called "Utilizing AI in the Library: A Student's Guide". And they gave the working group a page in that four-page guide.
So we put our videos and our links in there, together with the material created by other librarians in the university, and that's a good example of joined-up thinking and collaborative working. There's one place for students to go and one place for staff to go: one webpage for staff and one LibGuide for students, and it links out to all the other material that we have. In conjunction with this, I'm part of the Learning and Teaching Enhancement Unit; we're academic developers and learning technologists. We run training sessions for staff, and over the summer we created a discussion forum including students and staff, so that it's more: what is your experience, what questions do you have, and what suggestions would you like to pass forward to the working group? Those are the three items on the agenda for the discussion forum. The other thing, and I'm so proud of my colleagues: in March we got the unacceptable academic practice wording changed to say "presenting work generated by AI as if it were your own". So it's not banning it, and it's not ignoring it, but this is what would be considered unacceptable academic practice at our institution. We did try the Turnitin AI detection tool. We decided to turn it off over the summer because of its unreliability; as Helen pointed out already, if students want to escape detection they can easily do so, but it also generates false positives, and it can be very traumatic for a student to be accused of cheating when they haven't cheated. So we decided as a university to turn it off, and I'm really happy about that. Our longer-term goals for the working group, which we're currently working on, are reconsidering and redesigning assessment. I've created a workshop using the Jisc crowdsourced postcards; again, use what's out there, amazing materials from the National Centre for AI in Tertiary Education.
We will be running that in January, and we have started our discussion about AI literacy, trying to define it for our context for both staff and students, but that piece of work isn't complete yet. So I thought I would show you a couple of slides from the guidance for staff, which is an interactive session, and the guidance for students, which is the videos. This is an example, not the whole thing, of the risks and caveats that we talk about for staff. First off, generative AI creates plausible untruths and hallucinated citations. Students might bypass the learning process; some people in the chat today were talking about that, academic integrity issues. No tool can provide conclusive evidence that a student used AI, so I don't think it's worth our time and effort to try to find one. This is just a sample of the guidance for staff. So those are the risks and caveats, and these are the positive things that we can do. This goes to transparency: how we can talk to our students, making expectations clear, what's acceptable and what isn't. And I'm going to add a recent development: it took a little while, but our departments have started producing their own context-specific guidance for students in their department, and they have been willing to share those. We have a good practice sharing folder, and all of the directors of learning and teaching have been given access to it, so more and more departments are creating department-specific guidance for their particular types of assessment and learning needs. I'm really happy about that. We want the students to talk to their departments, and we want the departments to be transparent with their students.
And at the bottom here, I am very keen to encourage staff to devise a learning activity, where appropriate, to help students think critically about AI and use it both effectively and ethically. There's a lot of crowdsourced material with great guidance for that kind of thing. So here's our page in the LibGuide. This is the student-facing guidance. It isn't a live session; these are three very short recordings that the working group made, alongside a lot of other material created by the librarians. The three videos are: practical guidance on using it, which includes citation, that kind of thing; consulting your lecturers, so making sure that you know, as a student, what is okay for your own department, your own assessment, your own module; and then "don't lose the learning", so make sure that you're learning from this and not using it as a replacement for learning. Here are a couple of screenshots from the videos, and you can see how these align with the live session for staff: false information, oversimplified or biased output, missing out on learning. And then this is the slide that is, to be honest, closest to my own heart, which is about putting the learning first. We've got four pillars to that, if we can call them pillars. First, understand the reasons behind the assessments. Why would this help you learn? Why are your lecturers making you do this, whether it's writing an essay, doing a laboratory practical, or doing a lot of reading and summarizing something? Second, if you are using AI, improve on the output. That's the critical thinking: is this the best answer? Are the facts accurate? What's left out? Does it need more detail? Third, don't rely on AI: evaluate sources written by humans and find authoritative texts. And fourth, take all of this together to make your decision, synthesizing what you learn from multiple sources. So this is an example of the guidance for students.
And then this is from another one of the videos, emphasizing, again, being transparent and using it ethically: check with your department how they want you to do this, what's acceptable and what's not. So it's all about integrating this together, but not taking a black-or-white approach; it's a more nuanced approach that requires critical thinking. So that's what we did at the institutional level. I also want to say some words about individual resilience: how we each relate to each other in the context we're working in, and all the different ways we have contact and interact with others in roles similar to or different from our own. I think the human element is absolutely crucial, and we need to keep it at the forefront. That means we notice our emotions. Some of you might have heard of the amygdala hijack: basically, when our brains go into fight-or-flight mode, it's like escaping from a tiger, right? Here comes the dangerous thing, we're being attacked. If we feel that way, if we feel really threatened or in an unstable, unreliable environment, that can dampen our critical thinking, because we're responding from that emotional base. So finding ways to calm down that reaction, so that we can think critically, is going to really help. If we notice our own emotions, and when we're talking to other people we notice their emotions, we can ask: okay, are they in that fight-or-flight mode? Are they feeling that panic? How can we calm down from that, so that we can make the right decision and recognize it's maybe not black or white, but somewhere in the middle? And that means bringing kindness and compassion into play, and just being real, being our authentic selves, acknowledging that we're all different. My authentic self and your authentic self might be different, but we can still find that ground for kindness and compassion, and support each other.
And this current situation does require tolerance for uncertainty and imperfection. Every human being, I can guarantee you, is imperfect; it's not possible to be perfect. So okay, let's just be real. But there's also the imperfection and uncertainty in our environment that we don't have control over. So we try to find that place of resilience and strength, that sort of elasticity, to rebound from this. I don't know whether anybody in this session today does any kind of mindfulness practice: it could be meditation, yoga, going for a long walk in nature or in the middle of the city, whatever it might be that helps you to calm down. When we do that calming down, like I'm doing just now, you can hear it in my voice and see it in my body language, and if you are in a panicky state, hopefully that will help you also to come down into that more relaxed, mindful, aware state, where we can do that critical thinking, but without panic. That's a tip. I can't be perfect at it, and I panic too, but I've found it helpful. So one of the things we were talking about is AI literacy, and I think that staff need this so that we can teach it to students. If we don't have it, then we can't teach it, and we can't prepare students for a workplace where AI might be required. Again, Helen talked about so many things that are directly relevant to what I have to say. This will look different depending on the role somebody has, whether they're in a creative industry, a business environment, advertising, or something different. So I don't think it's easy to define exactly what AI literacy is, because that could change tomorrow if something new emerges on the scene, but here's what we think so far, at the moment. So one of the things is to choose: to make informed choices where we can, knowing the strengths and weaknesses of these various tools and approaches to using them.
I do think we need to be aware of data protection, and that includes how the tools were trained: all of that material scraped from the web without the permission of the human writers who wrote it. We don't have much control over that, but we might be in a position to make an informed choice. There's the reliability of the tools, and the equity factors involved, including some of the abuse of labor that we heard about. It's not possible, I don't think, for those of us in education to 100% boycott all AI; I don't think it's feasible. So we need to make the best decisions we can, be aware of the things we might disagree with, and when we do have an opportunity to make a choice, make that choice. The second one is to critique. This is the critical thinking: critique the output. Is it true? Is it biased? Is it incomplete? Is it overly generic? These are characteristics I'm seeing in a lot of the output, regardless of the platform. So that's for staff and for students: to really critique that output, more than we might even if we were looking something up on Wikipedia, because it could have dramatic falsehoods mixed in with very accurate information. And it takes time to do that. The third one is to use it ethically and effectively. On the ethical side, we're talking about academic integrity and being transparent about how we're using it. There's also an issue that hasn't been mentioned today yet but is very much on my mind, and on the minds of others, which is the sustainability of this as an industry. There's environmental impact in the mining of the material for the chips, the manufacture of the chips and cards, and the running of these massive data processing centers that have a huge carbon footprint.
There's a lot of diversion of resources, such as the water used to cool the equipment in the data centers and in the manufacturing process, diverting water away from a population that would otherwise drink or bathe in it, and sending it to factories instead. This is something we need to think about. As a human society, we are perhaps behaving as if there were unlimited resources, but actually there aren't; there are limits to the resources, and this needs to be factored in. Then, in terms of using it effectively, besides the critical thinking I've already mentioned in the middle panel, I think we need to decide, for any particular task, whether it's saving time or not. If I'm fact-checking, am I spending more time fact-checking than I would have spent just writing the thing in the first place? Getting a ten-sentence summary rather than reading the whole article might be beneficial, but do we really need to read the whole article? Maybe we do, or maybe that ten-sentence summary can be an entry point into reading the article: we read the simple summary, and then we read the real thing, and the summary helps us understand it. I think that's one way it could be used for good, to help students learn, for example. But we need to be very critical, not only about the output, but about whether this is increasing our productivity and is worthwhile, on a case-by-case basis. There isn't a simple answer for that. So there's no easy answer for any of this, but I've got these four tips: keep it simple and prioritize; be kind (that's bringing in the human dimension); use what's already out there, so we're not reinventing the wheel all the time; and think critically. We will be giving you a PDF with all of these resources. These are just a sample of some of the things I've found that are really great, and of course I did put our own material on the list so you can find it.
But the Jisc National Centre, the QAA, Kent, King's College, the Russell Group, Cardiff University, and Sheffield Hallam all have wonderful things, and TEQSA down in Australia. This is just a starting point, and I will share it with you. If anybody wants to contact me or follow me on the platform formerly known as Twitter, or on Bluesky, you're welcome to do so. And maybe I'll click through the link to the roundup, so that anybody who hasn't seen it can see it. You can see it's got events, a section for resources on artificial intelligence, and other resources, and I hope you find it useful. So I think that's me for the moment. Thank you so much, Mary. A lot of applause there for you. If anyone has questions for Mary, please do put them in the chat and we'll put them to Mary. You covered a lot of ground in that presentation, Mary, and the thing that really struck me is that I was reminded of the term "pedagogy of kindness" that's doing the rounds at the moment, and that sense of resilience. We talked at the start about the fact that the ethical framework was developed to improve things, and there was a sense that resilience is really important there, and we've kind of jumped from that to AI. That sense of kindness is really important, but it's also that sense, as you're highlighting, that we need to encourage staff and students to really understand this. So I just wondered, in terms of socializing that guidance, being aware that people do feel overwhelmed, have you found any approaches that particularly work there? Well, we're running the training sessions and the discussion forums once a month, and we're also reaching out to the departments and doing things on a bespoke basis. I think the way I started this session is similar to what I do in those sessions: how do you feel right now? And all of those feelings are valid.
Even if they're contradictory, and one person might have multiple contradictory feelings, that's valid. I think the first thing is for people to recognize that they're not alone and that we can support each other. I had a conversation at a recent discussion forum where we talked about leaning on each other; we can all lean on each other and support each other through this. And there's immediately a relaxation that comes with that, even though it's still challenging. Thank you. We've got a question from Kathy McLaughlin, who's asking: what AI training are you providing to staff, and is it on particular tools, prompt engineering, et cetera? So just a curiosity there about how you're approaching the training. Yes, at the moment we're not doing training on prompt engineering or particular tools, although we are exploring the Blackboard AI tools, and I use Vevox, and we've explored the Vevox tools. Guidance on how to use Vevox or the Blackboard tools critically will be part of the training that we offer to staff in the tools themselves. But I think there are too many different possible tools, and it's constantly changing; I didn't feel that, at this point in time, we could really do that. And again, going back to what Helen was showing about the human input into evaluating the output and tweaking the algorithms: these things are constantly changing. The NightCafe tool that I've been using for a year and a half for image generation is changing; the algorithms are changing, so the same prompts don't generate the same type of output. It's a moving target, but critical thinking is something we can bring into play. I think it's also about thinking about the platform: is this a platform we think is reasonably reliable, or just something you found out there, where who knows how they're using the data? The prompts we're inputting are going in to feed the algorithm.
How are they using personal data that we might be putting in? So I'm very cautious about using those kinds of external things, but the tools that we all have to use, that are part of what we offer in the university anyway, those are where we're focusing. Thank you, Mary. Another question here from Moira Mayly: is there an AI function that mimics your mindfulness moment creation? Oh, that's a really good question. I think we should create one. But the thing is, what's powerful about it is having a human being responding to you, rather than a chatbot. And I'm extremely skeptical of any psychological chatbots. We've seen cases where a chatbot is supposed to help people who maybe have an eating disorder or body dysmorphia, but a person who's suffering from that, maybe they're anorexic, can use the chatbot to generate guidance on how to get even thinner, and that's really harmful. So I would not recommend going for any psychological chatbot; that's my personal recommendation. But sit next to somebody and do it together, yeah. Thank you, Mary. And I'm just thinking too, you mentioned critical thinking a lot, and Helen touched on it as well. It's something we've talked about a lot, actually, when we talk about educational technology and graduate attributes, and it's something we've been trying to tackle for years, as is the whole assessment issue, isn't it, redesigning assessment. So I just wonder again about strategies to really help students think about and develop their critical thinking. There is that sense that students use social media, and we maybe try to raise some of the issues and things they need to be thinking about. And there's the whole sustainability angle: we've got a generation who are very concerned about sustainability and the climate, even to the point that it's causing anxiety.
So, just a question: how do you see that maybe we can bring that into teaching approaches, to help students think more critically about all of this? Yeah, really, really good question. I have tremendous respect, as I hope I've communicated already, for librarians and for their information skills and specialized knowledge. In higher education this has always been important, but even more so now. So: developing the ability to read something and evaluate it. Who is the author? What is their agenda? What is their evidence? How does this fit with other sources, and into a certain context? I think we have the ability to teach those things. There are experts out there who have that specialized knowledge, and I think it's really valuable if lecturers and teaching staff can direct students to that and incorporate it into our classroom education, even with one activity. Maybe you take 20 minutes or half an hour in class to develop that ability by thinking critically about some AI-generated output. What are the flaws in it? How can it be improved? How does the information stack up? Is it biased? Do we say, oh gosh, we're getting all white men when we ask for pictures of people from countries around the world? Bring that into play, so that students will be using it, but using it in conjunction with other sources where they've validated the authenticity of the information and the authority of the sources. There's an interesting comment in the chat from Maria Walker, who's advocating using AI as a ladder and not a crutch. I think that's a really nice visualization, thinking of it in that way: using it to enhance and unlock your own capabilities rather than as a substitute. Absolutely. Yeah, yeah.
So I think we're drawing to the end of this session, and Mary, I just want to thank you again for giving us an insight into how you've developed things at Aberystwyth, the approaches; I think there's a lot there that we can take. Thank you again for your generosity in sharing your weekly resource list, and also the handout, which we will share with all registrants. And just a reminder: there have been a few questions about whether we're recording all the sessions today, because maybe you've missed bits, so to highlight, yes, all the sessions are being recorded, and if you missed Helen's session you'll be able to see it after the event. So thank you again, Mary. Thank you so much. We will see everyone again at 12.15. Thank you so much.