 So, I think we'll get started. I hope that you have enjoyed this morning; plenty to think about. And we are looking forward to another couple of what promise to be really excellent presentations this afternoon. And I'm absolutely delighted to welcome Tarsam Singh Kooner to the stage. Tarsam is an Associate Professor of Social Work and Director of Social Work Programs at the University of Birmingham. His recent research has been funded by the Economic and Social Research Council. He has been instrumental in developing innovative digital learning approaches to help social workers explore how to navigate the ethical issues of using social media in social work. And so he is an ideal person to be speaking to us this afternoon. And looking at the program we've had today, I'm particularly pleased that he's going to be talking about a case study he's been working on: how he's been using a particular framework in class with his first year social work students. So Tarsam, as you can see from the chat, people are really interested in hearing what you have to say. So without any further ado, I will hand the stage over to you. Thank you, Sharon. Good afternoon, everyone. I hope you all had a good lunch and are looking forward to the presentation. This is more of an applied kind of thing, where I'm going to outline a story about a couple of issues that we came across as a teaching team and how we addressed them, particularly with a focus on AI integration and getting students to reflect ethically on that AI integration into their social work studies and practice. So my aim is to demonstrate a learning design that focuses on preparing students, and to get you to consider how you can use a similar inquiry-based learning approach yourselves as well.
So even though the topic that we're going to explore is social work based, I hope you look at it as a framework and think about how you can mirror this approach in your own discipline, using specific elements of the learning design to progress your students' learning and thinking in your own field. So I want to tell you a story. Basically, this story is about the fact that we are becoming more and more aware, as social work educators, that AI is already being used in the practice field by practitioners, and by students as part of their studies as well. And this has become apparent because practice educators who teach students on placement are telling us that their students are using AI uncritically. And in terms of the research that we're doing out in the field, managers and other practitioners are telling us that they're using AI to write, for example, assessments, reports and so on. And it's only when we start to ask people, well, why is it that you're doing this? What are the benefits? Are you aware of the pros and cons of using this approach? It's only then that we start to get an understanding that there's a degree of awareness that things such as AI bias exist, but practitioners and students aren't really delving into this in any great detail. And so because of this situation, what I decided to do was create a teaching session for our students to explore how we could get students to think about the ethical issues involved in using AI. And so in terms of the presentation this afternoon, I want to give you a little bit of context to the teaching session I used. I want to demonstrate to you how I used a digital trigger to look at issues around AI bias. And I'll share the links to the resources that I've used so that you can use them in your teaching too.
And then I want to demonstrate to you how I implemented the PAIR framework, and I'll go into this in a little bit more detail. I also carried out a little evaluation with the students, and I want to show you what the outcomes of the teaching were. And I hope that this all leads into a discussion where we can explore the approach that I used in a bit more detail. Now, some of the elements that I've used may be a little bit controversial, but I'm happy to explore those more with you during the discussion. So in terms of the teaching context, I'm the module lead for a BA1 module, and so I had BA1 social work students. The module is called Social Work Skills, Values and Approaches. The day that I undertook the teaching, I had 30 students in class, and it was a two hour teaching session. Now, the delivery element of the teaching session took about half an hour to set the context for students, but the remaining hour and a half was pretty much hands on, with students using AI in the classroom. And I'll explain how I did that using the PAIR framework. I also carried out a pre and post teaching evaluation focused on the learning objectives. Now, out of the 30 students, 25 completed the forms. And the age range of the students fell into the 18 to 20 and 21 to 25 brackets, with six students leaving this section blank. But I can tell you that most of the students were under 25. When I asked them whether they used AI in their studies, this result actually surprised me. Only seven students said that they used AI tools like ChatGPT in their studies. 14 stated clearly that they didn't. And this was anonymous feedback, so the students knew that we didn't link their responses to their names or anything like that, and they were encouraged to be as honest as possible. And four students left it blank. And this did surprise me, because I was under the impression.
And I don't know if you feel the same, but there seems to be this impression out there that more students are using AI tools like ChatGPT than actually appears to be the case. Now, I don't know if this was an anomaly, but it'd be interesting to get your views on this, your own experiences. So this was the situation at the start of the teaching, just to set the context. In terms of learning objectives, by the end of the teaching I wanted the students to be able to analyze the ethical considerations involved in using AI in social work, by assessing its integration and how it aligns with professional values such as social justice, equity and inclusivity. I also wanted the students to become aware of the concept of algorithmic bias and its potential to impact AI driven social work environments, and to get them to start thinking about proposing strategies and identifying ways of mitigating bias in AI applications. And thirdly, I wanted them to become aware of how they can avoid academic integrity issues by understanding how AI can and cannot be used as part of their social work studies. And again, I'll explore this in further detail with you. So right at the beginning of the session, I put all the students into small groups, with a maximum of five students per group. And in those small groups, right from the beginning, I got them to discuss: well, what is artificial intelligence? What are some of the opportunities and challenges that AI tools can bring to social work education and practice? What does algorithmic bias mean? And should you use AI tools like ChatGPT and Bard in your academic work? Then we had a little bit of a discussion about their responses, and this helped me to gauge where the students were at. The rest of the teaching basically revolved around this context setting. And to begin with, I started to help the students look at AI ethics and how they converge with the principles of social work.
So I drew on the principles of reliable AI by the European Group on Ethics in Science and New Technologies. And what I started to do was look at, well, what are the principles of producing reliable AI? What is it that the European Group actually says about this? And I drew out five elements that have been produced by this group; the link to the chapter is at the bottom of the presentation. They say that good, reliable AI should, as a matter of principle, have respect for human dignity, and this links to the International Federation of Social Workers principles in terms of working with service users: a recognition of the inherent dignity of human beings. In terms of the AI: individual freedom; respect for human autonomy and promotion of rights to self-determination; respect for democracy, justice and the rule of law; promotion of social justice; and respect for diversity. The point of pulling this out was to demonstrate that if we enact social work principles effectively, then AI, if it's developed reliably, should help to converge the two elements together. But also to draw out those times when these principles of reliable AI may not hold, and to ask how we can find out that that's actually the case. And then, serendipitously really, one morning while I was getting ready to go to work I noticed this five minute Sky News clip about the impacts of AI in the real world. And so I went to BoB National, and again I'll put this link in the chat box at the end, and I pulled out the specific news clip. And before showing it to the students, I asked them to think about: well, what are the real world impacts of AI? What are the examples of bias that you can see in this video, and how can bias be addressed? And how can this type of bias have an impact on social work practices? So I put the questions to them right at the beginning.
Then, whilst they were watching the video, I wanted them to make notes as it went along. Now, I don't know how well this is going to work here, but I'm gonna just show you a couple of minutes of this video just so that you can get an idea. And if I was teaching another subject, I think I could potentially use this video to pull out the wider issues around AI bias and then think about refining the next step so it's specific to my teaching topic. So I'm just gonna show you a couple of minutes, and I apologize if the audio is not too great. Now there are concerns about the use of facial recognition technology to catch criminals. It's thought the artificial intelligence being used by police is demonstrating biases when it comes to finding culprits because it's based on real world data. We're joined now by our technology correspondent, Artie. So Artie, tell us a little bit more about this bias, how common it is in these systems and what kind of impact it could have. So the problem comes from the fact that AI models are trained on real world data, and there's an inherent bias in this because of where the data comes from. So if you look at the most popular social media platforms, for example Facebook or Instagram, people from America or European countries are typically overrepresented in these data sets, purely because of where these platforms have been popular over the years. The problem is when you then use the results of these models to inform decisions, and you have to be careful not to entrench the problem. So say that you're a construction company and you're using AI to develop autonomous vehicles, and you want to make sure these vehicles can recognize any type of person, so they'll be able to stop before they get to them on a construction site. It's obviously very important in this situation to be able to address the bias in the models.
So we spoke to some companies that have been looking at solutions to the problem, but even in these cases, there are problems with those approaches. Let's not make a scene in front of your wife, because you are under arrest. This is Robert Williams. Three years ago, he was arrested on his driveway for a crime he hadn't committed. The reason? Artificial intelligence mistook him for this man, who was suspected of having stolen thousands of dollars worth of watches. Robert was released after 30 hours in custody, but the experience has had a lasting impact. So I'm just going to move us on from there, because the video is four minutes long, and what it does is draw out many of the elements that we've already talked about in Helen's presentation, in Mary's presentation, and in Oll's and in Tundie's presentations as well. And so what I wanted to do was set the broad context in terms of AI and how it's having an impact in society. Then what I did was narrow it down, and you can do this in your own teaching, to your specific topic. I drew on the British Association of Social Workers code of ethics for social work, looking at issues around professional integrity, being trustworthy in the use of AI, ethics in social work, the ability to work ethically, promoting social justice, challenging oppression and challenging unjust policies and practices. And what this video did was trigger discussion around these specific elements, because the students could see how the use of AI actually led to quite a lot of injustice for the people involved in that video. And then what you can do is make it much more specific, and this is something that really worries me in the field of social work. As you're probably aware, in child protection, for example, social workers have the power to remove children from their parents if they find situations in which the children are at risk of harm.
And there are now tools, for example, that people have been using to go upstream. The way they describe this is that they're going to try to predict when children are at risk before they actually fall into the category of risk, to try to provide support to families preventatively. But people are raising concerns around this: should you be using algorithms to predict risk and apply services in those circumstances? And what Cathy Ashley raises is the fact that we're all influenced by individual and professional experiences and by society, and this includes potential racial and class biases. And what she says is really quite important: that the data going into the machine is not benign, that there are influences such as prejudices that people write into their reports and so on, which the machine then picks up on, and the responses can end up being discriminatory against particular families. And so this is something that really worried me, because there are people now using AI uncritically to write assessments, and Tundie raised quite a few issues in relation to the bias in the system in terms of the data sets that are being used to generate reports. And so this is the type of thing that we wanted the students to become aware of straight away. And again, there's the Machine Learning in Children's Services summary report, where they actually undertook an experiment with four local authorities, trying to use machine learning to identify children at risk. And some of the key findings were that they didn't find any evidence that the models they created using machine learning techniques worked well in children's social care. And that, on average, if a model identifies a child as at risk, it's wrong six out of 10 times, and the model misses four out of every five children at risk.
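Those two figures map onto the standard classification metrics of precision and recall. A minimal sketch, using hypothetical confusion-matrix counts chosen only to reproduce the rates quoted above (these are not figures from the actual report):

```python
# Illustrative only: hypothetical counts chosen to match the quoted rates.
true_positives = 20    # at-risk children the model correctly flagged
false_positives = 30   # children flagged who were not actually at risk
false_negatives = 80   # at-risk children the model missed

# Precision: of the children the model flags, how many are really at risk?
precision = true_positives / (true_positives + false_positives)
# Recall: of the children really at risk, how many does the model find?
recall = true_positives / (true_positives + false_negatives)

print(precision)  # 0.4 -- i.e. "wrong six out of 10 times" when it flags a child
print(recall)     # 0.2 -- i.e. it "misses four out of every five children at risk"
```

Stated this way, the quoted findings describe a model with both low precision and very low recall, which is why the report's conclusions were so negative.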
And that none of the models' performance exceeded their pre-specified threshold for success, adding that the information extracted from reports and assessments didn't improve model performance. And their analysis of whether the models were biased was unfortunately inconclusive, but there is a low level of acceptance of the use of these techniques in children's social care among social workers. So the point of showing the initial Sky News video was to set the wider context and then start to narrow it down, to demonstrate to the social work students that AI is already having a significant effect, and potentially could have a greater effect in practice if we don't think about how to address this effectively. And so, moving on from the context setting, what I wanted to do was narrow this down a little bit further to something that the students could actually relate to. And this was about, well, should you use AI tools like ChatGPT and Bard in your academic work? And what I've done is I've created a set of YouTube videos. And again, if you're not too aware of how ChatGPT works, you can use these videos to set the context for your students. So I'm just gonna show you about 30 seconds of this video. Hello, friends, and welcome back to my channel. In this video, we're going to explore what ChatGPT is and how you can use it to learn the fundamentals of academic writing in your studies. ChatGPT is a specialized variant of the GPT language model. It's been designed specifically for chatbot applications. GPT, or Generative Pre-trained Transformer... So I'm just gonna stop there, but just to let you know that if you're not too confident about introducing what ChatGPT is, for example, and how it can be used in academic work, use this resource or other resources that exist out there to trigger discussion and debate. So that sets the broad context in terms of, well, what is ChatGPT and how does generative artificial intelligence work?
Some of the pros and cons behind it, and how it can potentially be used in your academic work. What I did then was draw on the guidance from the University of Birmingham, pretty similar to how Mary demonstrated the guidance offered at her institution, and I went through this with the students prior to actually implementing the framework. So the students were still in their groups, and what they did was look at the guidance from the university. And then I implemented the PAIR framework. The PAIR framework was produced by Professor Acar, and it's based on an article that he wrote called Are Your Students Ready for AI? A Four-Step Framework to Prepare Students for a ChatGPT World. And I have to admit, I think this is fantastic. It's worked really well for me, and I would encourage you to at least read the article and think about employing this framework in your own work. So obviously I had my students working in small groups. The first step is to formulate the problem: identify the core problem, its components and its constraints. Then select a suitable AI tool: explore and identify the most suitable generative AI tools for your problem. Then the interaction: interact with the AI tools, experiment with different ways to interact, critically evaluate outputs and integrate them to tackle the problem. And then reflect on the experience: evaluate how the generative AI tool helped or hindered problem solving, and reflect on your feelings when collaborating with generative AI. So what I'm gonna do is show you how I implemented this four stage model. The first thing, in terms of setting the problem, was that I made it clear to the students that I'd linked this next section to something they can relate to, which is assignment writing. So let's do the hands-on exercise in small groups.
And what I wanted them to do was explore the topic of the challenges for UK social workers in supporting asylum seekers post-2020. And I asked them to choose a specific area within this theme, such as social work, legal or ethical challenges. And at this point, before we went any further, I got them, in their small groups, to create a handful of research questions based around this topic. Then, once they'd done that, I employed the second part of the PAIR framework, which was to get the students to select a tool. They could use either ChatGPT or Google Bard in this case. And once the students had selected a tool, what I got them to do was select a research question that they'd already produced, and then use the AI tool to see if they could refine it further by using different prompts. And what was really critically important was that somebody in the group kept a record of the discussions with the AI tool. This helped for reflection afterwards. Now, this is the controversial bit. After a certain amount of time, I then got the students to use the AI tool to create an assignment outline: to ask the AI tool to base the outline on an introduction, a literature review, analysis and conclusion, and to provide some sources to use in the assignment writing. And as they did this, what I wanted them to do was discuss in their groups: are the sections produced too generic, incomplete or inconsistent? Should you modify and expand some sections to make them more specific and relevant to the topics being explored? And how could they check the accuracy, relevance, bias and originality of the outputs generated? The idea of doing this was to get them to start looking at the materials being produced in a critical way, to start the discussion and debate about, well, how do they know, for example, that the sources being produced aren't hallucinated, that they're actually accurate?
And if they were using ChatGPT, how up to date was the information being presented, considering that we'd set the post-2020 date in the initial research question? It was actually fascinating to watch the students engage with this activity using ChatGPT and Google Bard, because in some groups you could tell that it was the first time they'd used these tools, while in other groups there was much more proficiency, and you could see that these were students who had used the tools but were starting to think critically about the different elements of the work being produced by the AI. At the end of the teaching session, first the groups had a discussion amongst themselves, and then we shared this in the wider group. And what I asked them to do was discuss their experiences of using AI and how it might impact their essay writing process. And I got them to consider these questions. What challenges did you encounter when using the AI tools, and how did you overcome them? In what ways did the AI tools enhance or hinder your problem-solving process? How did using the AI tools affect your understanding of your topic and your arguments? And we also moved on to other elements, such as: what emotions did you experience while interacting with the AI tools? There was frustration, there was joy, and a number of other emotions as well. What ethical issues or dilemmas did they face when using the AI tools alongside the university guidance? And again, I drew them back to the university guidance page. How much of this material did they think they could submit as their own work? And we had this large group discussion, and we started to raise questions and debate the different elements and their understandings of what the university policy was.
And right at the end of the teaching, I had to make very clear the academic misconduct element of the university guidance: the code of practice stipulates that they can't use the output of generative AI, i.e. the content it creates, in any assessment unless explicitly authorised by the tutor, as it was in this case, although not for the assessment, obviously. This meant that submitting work generated by these tools, or incorporating it into their own work without explicit permission, would breach the code of practice, and they would fall foul of the academic misconduct provisions. And I did have to make them aware that penalties could include having to withdraw from the course. And in terms of how this developed, the teaching actually worked really well; there was a lot of discussion and debate. And I carried out an evaluation at the end as well. So what you'll see here is, in blue, the pre-teaching session responses and, in orange, the post-teaching responses. And you can see that, for the statement "I feel confident in using AI tools like ChatGPT in my studies", the blue responses show that to begin with there wasn't really that degree of confidence, but it got better after the teaching, although there were some outliers here as well. And then, for "I can identify some of the ethical issues involved in using AI in social work settings", you can see that pre-teaching we had seven and seven who strongly agreed or agreed, but there were seven, three and one for neutral, disagree and strongly disagree. But post-teaching, you can see that the responses to this question were strongly agree and agree. And for "I understand the concept of algorithmic bias", pre-teaching you can see that there were 11 who disagreed, but post-teaching we had a higher score in terms of those who agreed.
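As an aside, the pre-teaching counts quoted for the ethical-issues statement sum neatly to the 25 completed forms. A quick sketch of that tally (the exact post-teaching split isn't quoted, so only the pre-teaching counts are used):

```python
# Pre-teaching responses to "I can identify some of the ethical issues
# involved in using AI in social work settings", as quoted in the talk.
pre = {
    "strongly agree": 7,
    "agree": 7,
    "neutral": 7,
    "disagree": 3,
    "strongly disagree": 1,
}

total = sum(pre.values())
agreeing = pre["strongly agree"] + pre["agree"]

print(total)             # 25 -- matches the number of completed evaluation forms
print(agreeing / total)  # 0.56 -- just over half agreed before the teaching
```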
And for "I'm confident I can lessen the effects of AI bias in my future social work role", again there was a movement post-teaching, as you would expect, towards being more confident. And for "I understand how I can correctly use AI tools in my social work studies to avoid potential accusations of plagiarism", there was a neutral response here, but you can see that, given the starting position, the learning did increase as we went along. And this was important because we have had, and I'm sure you have had the same situation, a number of cases where students have submitted work that has led to academic integrity outcomes, because either students weren't aware that they weren't allowed to use AI, or they were using AI and had basically been caught doing so. And so by raising this issue here, exploring the code of practice for academic work and looking at how the tools can be used, it actually helped to clarify what was allowed and what wasn't. But what I enjoyed about doing this session with the students was the level of debate and discussion that we had around the use of AI, because, to be honest, a large number of the students were already using it to begin with. And what we're hoping is that this triggers that discussion, debate and critical thinking when the students move towards producing their academic work, and also in terms of their future practice. And we will build on this teaching as the students move into the second and third years of their studies. So that's me. I tried to make sure that I remained within the 30 minutes. I will put the resources in the chat facility right now. I do have a YouTube channel with a specific AI and education section, and you can use the videos in that section if you find them useful. My email is there; on Twitter, I'm Attica.ly65. And when we share the presentation, I'll share the references as well.
I'll just pop the resources now into the chat box. So I hope that was okay. Thank you so much, Tarsam. That was absolutely fantastic. And I can see the little clapping hands symbols coming up there. And I have to say, perfect timing as well; I think you were spot on half an hour. Thank you. I was worried I had too many slides to begin with, but there you go. Excellent. So I'll just invite people, if you'd like to ask any questions, please do use the chat or put them into the Q and A. There was one question that came up quite early on from Denise Hock, and I think it was when you were talking about the initial survey that you'd done with your students, and how many had used AI tools and how many hadn't. She was asking: what date, and what year of study was it? So they were first year students, and it was November this year. First year BA social work students, and November of this year. Which is what surprised me, actually, because, and it was anonymous, so there was no way that it could be tracked back, and I made that absolutely clear to the students, I would have thought that the number would have been way higher in terms of those students who were using AI, but it wasn't the case. And there was a little exchange around exactly that, Tarsam, in the chat, because I think Denise had said that she found that in September most first years had not used it, but from second year up, the majority had. And then there was a little bit of conversation about maybe it depends on the discipline. It could be, because we are a fitness to practice discipline, so the students sign up to the codes of practice in terms of social work, as well as the fitness to practice codes of practice within the university. And so the penalties can potentially be harsher for academic misconduct, and just misconduct in terms of behavior.
But I don't think that was actually the case here, because what surprised me was that we had a vast majority of students under the age of 25, most of them 18 to 20, and only seven out of the 25 said that they had used something like ChatGPT. Okay, yeah, really interesting. Well, can I also make another point in terms of my observations of what was going on in the teaching groups? It was, to a degree, quite apparent that some students had not used these tools, simply because of the way they were interacting with them; there was a lack of familiarity. And you'd have observed that up close, firsthand, as well. So yeah, really, really interesting. We have a question from Adam Levi. He says: interesting talk, seeing the real world discussions with students about this topic. I was wondering, given that you mentioned that people have been found to use AI, how have you discovered this, and/or how would you demonstrate 100% certainty that they are using it? Well, you can't, can you? There is no way that you can demonstrate 100% certainty. I mean, similar to what Mary said earlier on, we have a generative AI community within the university as well, and the fact is that these are discussions that are going on at the moment. How do you have that 100% certainty? You can't. The way we've picked up the fact that students have used AI is mainly through hallucinated citations and changes in writing style. But other than that, until the systems become way more accurate, I can't see us demonstrating it 100%. We can't say that 100%. Sure. Natalie, do you want to ask the next question? I haven't had a chance to read it. Yeah, thanks, Mary. Sorry, Sharon, I apologize about that. Yeah, so Tarsam, the next question comes from Claire Hawkins.
And she's saying that there seems to be a potential difficulty for studies with a practice element, where students may be challenging practice in the field due to the critique they've engaged with in the university setting. And she's asking what your thoughts are around how this might be managed. In terms of what's going on in the field compared to academia, I think that's been a perennial problem anyway, hasn't it? You know, it's something that's always existed, and I don't know how we can manage that. I mean, we have practice educators who have students on placement, so we're connected with them while the student learning is going on during placement. And the practitioners are coming back to us and saying that their colleagues are using AI. And it's that kind of lack of professional awareness of the biases within the system that really does worry me, and also issues around confidentiality as well. And so I think post-qualifying training may well help. But in terms of that tension, I think the students have to go out there to change the system anyway, because otherwise what's the point of us training them? The systems currently aren't working, and there is an ethical requirement on social work students to challenge social injustice. And so I would say that, where it's done ethically, we support the students, because hopefully they're encouraging that positive change. I'm not sure if that answers the question, but that's the best response I can give. I think that's a tension in many professions, isn't it? You see it in healthcare professions as well. And maybe that challenge, which we face in academia as well, is actually finding the time to do that staff development, to engage in it, isn't it? It's maybe the students actually supporting that staff development in practice. Absolutely. There's a question from Peter Bailey.
And he says he loves the idea of the approach that you used, the peer and university guidance exercise that you ran with your group of students. And he asks, have you run this with several different groups, with similar results? No, this is the first time I did it. It was in November, because that was the opportunity that I had, but I do want to run it with second- and third-year students, and indeed with members of staff as well, because I think undertaking this exercise with colleagues who aren't as familiar with AI is not a bad thing to do. It raises awareness and possibly develops confidence as well, in terms of thinking, well, this is something I wasn't familiar with, but I am becoming more familiar with it now. And I wonder, Tarsam, based on this, because this was a question I wanted to ask, but it's a natural follow-on: what would you change, or what would you tweak? What have you learned from the approach? I'd try to simplify it a little bit more. Given that I had a two-hour session, I tried to introduce the broad societal element to it, because obviously it's practice-driven, and I had to set the context for the students so they understood why this is important. And then getting the students to actually engage with it in class is much better, because that develops the critical thinking element. I might have the societal element as a separate teaching session, leading into the more hands-on one, because then I think we'd have more time to engage in that discussion and debate. Okay, yes, gotcha. Yeah, definitely. Sorry, Natalie. Now, I have to ask another follow-up question.
I wondered, Tarsam, whether there's an opportunity to develop this slightly further in an interdisciplinary learning context, where you maybe bring together students from different disciplines, maybe education students, medics and social workers, and whether that might bring something else to the learning experience? Yeah, because I think you'd get different perspectives from the different professions. So one idea I have had is using a case study as the core to hold the learning design together, so that it could stimulate debate between the different professions. We could also look at what the outputs of AI would be if the prompts were discipline-specific, and how they create crossovers or tensions between the different professions in terms of achieving a positive outcome for the case study family. That'd be interesting, wouldn't it? Yeah, but it's always that thing about trying to find a room, because I think this works best face-to-face, and timetabling to bring the different disciplines together is obviously always the challenge. But doing it online, out of hours, could potentially be a way forward as well. Yeah, another question here from Frankie Wardale, saying, did any students express concerns about an AI tool leading to a lack of development of their own abilities? No, because the whole point of the teaching was to demonstrate to the students that they must use their own critical thinking faculties. And when we were talking about the emotions element, which I think is really important, none of them really expressed the degree of frustration that I thought they would have, that, well, what's the point of us doing this if this is going to do it for us? But actually, they came to understand that it's not accurate.
And the one thing that I really wanted to get across to the students, and it was interesting in our discussions, is that just because a machine says something doesn't mean that it's right. I know it's such a basic thing to think about, but there's that kind of uncritical absorption of material off a screen just because it comes from a machine. And this is where the engagement with where does that data come from, and how is that data influenced, really helped that development. So no, in terms of that question, not for my group, but I accept that it was just a small sample, and others may find different results. I'm not seeing any other questions coming up in the feed. So Tarsam, I'd like to thank you so much for such an interesting session. Really great to see this. By the way, if anybody has thoughts about how you might use the framework in your own teaching, maybe in your own discipline, it would be really fascinating to hear about that, and I'm sure Tarsam would be interested as well. But thank you also for your generosity, Tarsam, in sharing your experiences and your resources. I'm going to go and look you up on YouTube immediately. Please do. So much to learn. So thank you very much. There are lots of clapping symbols in the chat. Stephen's saying he's thinking of using this in a train-the-trainer session, which is what you were talking about there, using it with staff. So that's fantastic. That's excellent. And if anybody wants to get in touch, please do, my email details are there as well. All right, well, thank you very much. And I think we are due to gather again at three o'clock, is that right? I think so. So we'll end there. Yes, the next session is at three o'clock, and we'll see you then. Grab a cup of tea. Thank you. Bye, everyone.