So welcome back, I think people are still joining us and we'll kick off in a minute or two. Okay, well, I'm delighted to kick off the next session, so welcome back everyone. We're going to be exploring the whole issue of achieving inclusive education using AI, and I think inclusivity and inequalities are certainly among the issues that the ethical framework really does try to address. I'm delighted that we have Tunde Durowoju joining us today to speak to this. Tunde is a Reader in Education Management and the Associate Dean for Diversity and Inclusion in the Faculty of Business and Law, and he is also a very active member of the ALT Anti-Racism SIG. So a very warm welcome, Tunde, delighted to have you with us today, and really looking forward to seeing how you're going to cover this really important topic of inclusivity and AI. So welcome.

Thank you, thank you very much, Natalie, and thank you very much everyone for joining us today. As Natalie said, I'm from Liverpool John Moores University and I'm the Associate Dean for Diversity and Inclusion. Part of that role is looking at how we ensure that we as a faculty, and LJMU as an institution, are very inclusive in what we do. AI presents a unique opportunity, and we've been looking at how to explore using AI as a means of achieving inclusivity. So what I'll do now is quickly share my presentation and then we can kick off, so just bear with me a second. I hope you can see the presentation.

Thank you, we can see that perfectly.
Brilliant, thank you. All right, so my topic today is around achieving inclusive education using artificial intelligence, and I've framed it around the idea of 'disadvantage risk', because I believe that when we talk about inclusion, one of the things we need to address is the disadvantage risk involved. If you look at the picture on the slide, it's a young girl competing in a vertical jump competition. What you see first is that the young lady is measured for her height and reach, and that measurement is used as the baseline for measuring the actual jump performance. She then jumps, and the actual jump height is measured against that baseline. What they've done in this picture, essentially, is identify what they consider to be a disadvantage risk, namely the height and reach of the competitor, because different competitors have different heights and reaches. If you are genuinely interested in measuring the jump performance of a competitor, it's important to eliminate those disadvantage risks. I'll use the same analogy when talking about inclusion within education: what are the disadvantage risks, and how do we level them off so that what we are measuring is the actual performance of the students? That way there is no disadvantage, and we ensure equity and equality in what we measure. What we find right now in most higher education institutions is that we measure disadvantaged performance rather than the true performance of students. So we need to be able to address this issue.
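The baseline measurement in this analogy can be made concrete. A minimal sketch (the figures are invented for illustration, not taken from the talk): subtracting each competitor's standing reach from their jump reach, as in a vertical jump test, removes height and reach as disadvantage risks.

```python
competitors = {
    # name: (standing_reach_cm, jump_reach_cm) -- invented figures
    "A": (220, 265),  # taller competitor
    "B": (195, 252),  # shorter competitor
}

# Naive measure: raw jump reach rewards height and reach.
raw_ranking = sorted(competitors, key=lambda n: competitors[n][1], reverse=True)

# Levelled measure: subtract the baseline so only the jump itself counts.
levelled = {n: jump - standing for n, (standing, jump) in competitors.items()}
levelled_ranking = sorted(levelled, key=levelled.get, reverse=True)

print(raw_ranking)       # taller competitor wins on raw reach
print(levelled_ranking)  # actual jump performance wins once levelled
```

By raw jump reach the taller competitor wins; once the baseline is removed, the shorter competitor's larger jump (57 cm versus 45 cm) comes out on top, which is exactly the levelling-off the talk describes.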
So the problem is how we then utilise AI, because AI is an emerging tool that has been used in different fields to address what are considered important challenges. In the medical field, for example, it's being used in diagnosis; in the creative arts industry it's being used to enhance the creativity of artists; in the physics world it's being used to solve difficult, hard problems. So there's a perspective that AI is an important tool for addressing important challenges. The question then is: what do we see AI as a tool for addressing within higher education? AI can potentially be a very useful levelling-up tool, but at the moment it seems we're still not looking at it from that perspective; we're still not viewing AI as a tool to address a specific, unique challenge for us. What I'm talking about today is how we use AI to address the unique challenge of inclusion. You just need to read the literature, or visit some of the discussion spaces, and you would very quickly realise that yes, we have an inclusion challenge within higher education. Some symptoms of that challenge are things like the awarding gap, where there is a huge difference between the proportion of white students who achieve a 2:1 or first-class degree compared to non-white students who achieve the same degree outcome. There's a huge disparity there. Another symptom is in what we consider the student experience, for example the experience of assessment, or employment outcomes. So there are gaps in existence, and the question is: as universities, or as the higher education sector, or even the further education sector, how do we address this? Inclusion is an important challenge, and we need to be addressing it.
Now, within higher and further education we currently see different applications of AI. It's being used as chatbots; it's being used within assessment grading systems to ease the marking load of academics and tutors; it's being used in adaptive learning systems; and it's being used to analyse huge volumes of data. But we currently have two classic problems with AI: a design-of problem and a design-for problem. With regard to the design-of challenge, these are things we already know. We tell our students to be careful when they're using generative AI because there are repeated biases in its output, and that's because generative AI replicates bias: the AI algorithm is trained on data sets that were generated by people, and we know there are biases in those data sets. If that is not addressed, the AI will simply replicate the existing biases, and of course operators within this market are looking at ways to address that repeated-bias issue. There are also ethical concerns; you just have to listen to the news to see what's happening with AI regulation and AI ethics. There are blurred lines between fact and fiction, because by its very nature generative AI draws on data sets gathered from across the internet and compiles what it believes to be the best answer to the question you pose. We know it gets data from all sorts of sources, and you can't really tell whether what is presented is actually fact or fiction.
We also have what we call the WEIRD-lens problem, where most of the output you get from generative AI is based on WEIRD lenses; WEIRD is the acronym for Western, Educated, Industrialised, Rich and Democratic societies. If we use ChatGPT as an example: as of 2020, around 92% of all the data used in training GPT-3 came from English, and the next largest language was French at around 1.8%. You can see a huge disparity; it's very English-focused, and if that's the case, where is the diversity of knowledge that we hope to get with AI? These are some of the design-of challenges that we know currently exist. There are also still transparency issues: where is the data coming from, and how is it being used, hopefully not in a malicious way? If you listened to the news just a few days ago, the EU eventually came up with its regulation for AI, and hopefully that will start to get people a bit more open and willing to use AI, because the adoption rate worldwide, for many businesses especially, is around 50%. You can also look at the breakdown by country of the willingness to adopt AI, and whether people trust AI or not, and you can see where different countries sit on that spectrum; in the UK, for example, the willingness to accept AI is still relatively low, and we still have a long way to go. So these design-of challenges are something we need to be mindful of when we're using AI.
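To put the cited disparity in perspective, a quick back-of-the-envelope calculation using only the percentages quoted in the talk:

```python
# Shares of the GPT-3 training data by language, as quoted in the talk.
shares = {"English": 92.0, "French": 1.8}

ratio = shares["English"] / shares["French"]
print(f"English is roughly {ratio:.0f}x better represented than French")
# roughly 51x
```

Even the second-best-represented language is outweighed by a factor of about fifty, which is the scale of the WEIRD-lens problem the speaker is pointing at.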
We also have what I call the design-for challenges, and this looks at what we're using AI for within higher education. When we look at where we apply AI, we mostly design it for the majority, and in the UK context that would be the white majority; it's important for us to consider it also in relation to the minority part of the student body. Also, whenever we are purposeful about how we use AI, it's usually for a very limited number of protected characteristics; the ones we typically use it for are disability and neurodiversity. That's a good thing, but we also need to be mindful that there are other protected characteristics that also warrant our purposeful use of AI, and that's something I'd like to focus on today. Too often we ignore intersectionality. We know that an individual might have several disadvantage risks we need to be mindful of, but the problem we currently have is that many of our interventions have a single focus, so it's high time we started to think about the intersectionality of these disadvantages; we'll touch on that a bit later on.
Again, there's still that perception within higher education that inclusion means treating everybody equally, and unfortunately it's still an issue. I spoke with some people just last year, and some of what came out of those conversations was people saying, 'I treat everyone equally; I can't be seen to be doing anything extra.' Of course, the issue is not doing anything extra; the issue is being purposeful, thinking about the needs of the various student demographics within your institution and addressing those specific needs. That's the key issue here. And the last challenge that I think we have is what I call the academic, or institutional, limitations. What do I mean by that? We have time limitations as academics, tutors and educators; we have workload limitations; and we have subjectivity limitations. Whether you like it or not, we are all very subjective in the way we deal with things, and sometimes that subjectivity is not apparent to us, so there are things around unconscious bias as well.
We also have issues around flexibility and around a lack of creativity, and those limitations don't give us the space and capacity to think about inclusion in a way that is more purposeful and more effective. If you've got so much on your plate, and you haven't got enough time, and you're not allowed the flexibility required to address inclusion issues, then the issue will remain an issue and won't be addressed. That is why it's important for us to think about how we lighten the load in this particular area, and AI presents a unique tool that can help us as academics, tutors and educators to reduce the burden of time and workload by improving our workflow and our productivity. Hopefully with that we will have enough space and capacity to be more intentional about how we use our roles to achieve inclusion in education.

So, as institutions, where are we? Can we actually achieve inclusion using AI? Unfortunately, I think we're still not where we ought to be, and this is a provocation: most of the policy around AI is really geared towards handling students' use of AI, for example ChatGPT. Either we've got a policy in place that says you can't use generative AI at all, or one that says you can use it but you need to reference the fact that you have used it. So there are different interactions with AI, depending on the specific university, but unfortunately most institutions within the UK, and around the world, still have not issued any policy whatsoever on the use of generative AI, and that needs to improve. The second issue is around our current view of academics, tutors and educators: we have this omniscient view of them
where we expect them to deal with new technologies as they emerge, to train themselves and to address the problem. Yes, we do rely on educators, but we tend not to think about the challenges that educators themselves are facing, for example with the inclusion agenda we're talking about. You have certain academics who are very active when it comes to inclusion, but when you ask others, it's not an issue for them; they don't even think about inclusion. We ran interviews on staff perceptions of inclusion just a couple of years ago, and one of the things that came out was that some staff don't even think about inclusion when they're designing their curriculum. In this day and age it's important for us to think about what we're doing and who we are teaching; we need to know who our students are, and then hopefully we can be more effective. So as institutions, what we currently do is remain very hesitant with regard to AI, and that's understandable, but we need to do a bit more. We also need to help develop the capacity of academics to be more purposeful in how they use AI, and not just how they use AI, but how they use it for inclusion purposes. As institutions we need to recognise that we have limitations, and we need to be able to overcome them; AI can help us address some of those limitations I talked about, the workload and productivity issues. We also need to understand that it takes time and effort for an academic or educator to get to the point where they are confident enough to utilise AI in the learning spaces. Organisations such as UNESCO are coming up with frameworks that institutions can use to help develop that agency within educators, and I think it's very important to read up on
that and follow through on it. We also need to develop a multi-purpose approach; in other words, we need to use AI that can address several challenges at once. And we need a multi-narrow-AI strategy: it's not just about using, say, ChatGPT within your institution, it's about how you use a collection of AI applications, for example chatbots and adaptive learning, in an effective and combined way. That's where we need to be as institutions. So if I ask everyone here today, 'What is your AI use plan?', what will the response be? Will it be that right now you're just talking about policies on how you help students use generative AI? Or will I hear something more purposeful, further down the line than that: 'We've adopted specific AI technologies, and we're training our staff in their use so that they can pass that knowledge on to students and help students use AI in a safe and engaging way'? That's a question we need to address. Now, if you look at the picture on the right-hand side, to develop an institutional AI strategy there are three things that I think are completely necessary. The first is that we need to adopt a utility view of AI, in other words AI as a tool: how can we use AI to address some of the challenges we've got? Secondly, we need to recognise the agency costs. We have this reliance on academics, but what can we do to help them get to the place where we need them to be in order to further the AI and inclusion agenda within our institution? Do we need to have training in place? Do we need to pay for training? And if they're going to go for training, do we need to factor that into their workload allocation, for example? Do we need to make space for them and
time for them to do that? Those are some of the things we need to think about. And when we're talking about AI there has to be a coordinated view: it's not just about using one type of application, or using AI to solve one task; it needs to solve multiple tasks for us. For example, we use chatbots for engaging students; we use generative AI in the learning spaces; we use adaptive learning so that students can go home and catch up with the lesson of the day, or learn at their own pace. It has to be coordinated, and that's very important. So how do you develop an AI use plan? The first thing is that you need to understand your context, because when we're talking about AI for inclusion, remember you are addressing the needs of people, of humans, and you need to know what those needs are. Secondly, you respond using AI: it's a matter of thinking how AI can help you address some of those issues. Understanding your context essentially means being able to collect and analyse the data you've got. Think about your demographic data: gender, ethnicity, disability, religion, sexual orientation, area of deprivation, and even characteristics that don't come under what we call protected characteristics. If you look at the OfS, for example, they have come up with an Equality of Opportunity Risk Register that can also be useful, so I would say it's important to engage with those registers and understand how other forms of deprivation can be factored into what we are addressing as an institution. More importantly, we need to understand the intersectionality of these disadvantage risks. So understand that, and then, secondly,
look at your performance data. What we did at LJMU was look at the performance of the various student characteristics, and we also ran a survey to understand the experiences of those student groups, just to get a better sense of some of the challenges they're facing, so that we could then think about bespoke solutions to address some of those issues. I would say this is one of the most important aspects of achieving inclusive education: know where your students are coming from and understand what disadvantage risks they carry. Once you've done that, you can be much more proportionate about how you use AI to respond to operational issues: how do you use AI in curriculum development, in assessment delivery or assessment strategy, in outside-hours engagement, and in an adaptive learning format? And lastly, you need to think about intersectionality. There's a TASO report that came out recently reviewing two different interventions around decolonising the curriculum, and it found limited evidence that those interventions succeeded in addressing the awarding gap I spoke about initially. There could be several reasons for that, but it again points to the danger of trying to address a single disadvantage risk without thinking about the intersectionality of all these disadvantage risks and how they impact the awarding gap. That is why being purposeful and intentional is very important. So what I'll do very quickly now is go through each of these boxes and give you an example of how you can achieve inclusion through curriculum development and the other areas we have here. Again, these are just examples; the point here is for you to think
about how you use it in all of these areas in a very targeted way. For example, in curriculum development you can use it to pluralise perspectives on a particular topic. I teach supply chain management, and I also teach management theory, so I can tell my students to use generative AI to come up with, say, five different aspects of management theory. They do that, and then I can say, 'Right, how do we pluralise our knowledge of this topic?' I can then ask them to use a specific prompt that asks the generative AI to look at the perspectives from, for example, an African point of view. When you start to do that, you start to get concepts like Ubuntu, togetherness, from that African cultural perspective, and that enriches the knowledge of our students. Another example would be to ask it to generate a reading list; I remember that in the previous session they talked about the use of the library. When you ask students to do this, it's important to ask them to verify the output, and that's one additional thing they can do to improve their learning: this list has been generated by generative AI, so do these sources actually exist? They need to go and verify those lists, and then hopefully they can use them. So those are two simple examples of how you can use AI in curriculum development in an inclusive way. The second area is assessment delivery: formative assessment, for example, or low-stakes marking, using AI as a feedback mechanism for students. You will find this is also very useful for international students who are not very familiar with some of the criteria, or some of the language, that we use in the assessment brief that we give them, so this
hopefully helps them to learn more and familiarise themselves with those criteria. One example: you could encourage students to use generative AI to grade their own submission. You've given them a task; they ask the AI to grade their submission using the set of criteria you're using in your module, and once they've done that they can start to improve the work, because the AI can give them specific feedback which they can use to improve. Or you can get students to improve the quality of their response through prompt engineering, and you can then see how their responses improve over time; again, that can be very targeted, and very informative and educative. Then there's assessment for learning: you can provide students with a set of marking criteria and get them to mark sample responses that you've already marked, where you know what the answers are. They can then interrogate the difference between the marks you assigned and the marks ChatGPT assigned, and hopefully through that they can see where improvements need to be made. You can also use AI in outside-hours engagement. One thing we learned from the survey we did was that when students have issues within their modules, they typically don't go to the module leader; they go to their peers, and some actually go to their family members to address those issues. The reason they don't typically go to their academic tutors could be the fear that, if they went and asked the question, the tutor might think they are stupid, and that might leave an impression on them, so they avoid going. Or it could just be the language
that we use: we always say, 'You are at university now,' or, 'You're a master's student now, you need to be able to do things on your own, independently,' and that also makes them apprehensive about approaching their academic tutors with the questions they have. So one way of taking that information to them might be through chatbots, and it could be as simple as converting some of your FAQs into a conversational AI product. It can address assessment information; if you use it in a targeted way it can also help you be more transparent about your assessment criteria and what you're grading, and help you remove the hidden curriculum that we know exists in many assessments. It can also help students to know more about the module. Induction is another very big area, because when students come to us we give them lots and lots of information, so a chatbot that helps them understand and digest that information would be very helpful. It can also support student self-evaluation of writing, for example through tools like Grammarly, and we know that language is a huge disadvantage risk for many students. You can also use it in adaptive learning; I've put examples there, such as CogBooks, McGraw-Hill Connect, Khan Academy, etc.
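The FAQ-to-chatbot idea mentioned here can be prototyped without any large language model. A minimal sketch, assuming a hand-written FAQ list and simple fuzzy matching; the questions and answers below are invented placeholders, not real module content:

```python
import difflib

# Invented placeholder FAQs -- in practice these would come from the
# module handbook, assessment brief, and induction materials.
faqs = {
    "when is the assignment deadline": "The deadline is published on the module page.",
    "how is the assignment graded": "Grading follows the marking criteria in the brief.",
    "who do i contact about extensions": "Contact the module leader or student support.",
}

def answer(question: str) -> str:
    """Return the answer to the closest-matching FAQ, or a fallback."""
    match = difflib.get_close_matches(question.lower(), faqs, n=1, cutoff=0.5)
    if match:
        return faqs[match[0]]
    return "Sorry, I don't know that one yet; please ask the module team."

print(answer("How is the assignment graded?"))
```

In production you would back this with an actual conversational model, but the workflow is the same: FAQ corpus in, targeted answer out, with an explicit fallback so students are directed to a human when the bot doesn't know.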
Lastly, think intersectionality. How do you use, for example, chatbots and generative AI in a way that addresses multiple disadvantage risks? An example of those disadvantage risks would be English language proficiency, an issue that might affect neurodivergent students, international students, or students whose first language is not English. You can also look at how you address, for example, a cultural gap, and at things like anxiety; we know that students have higher levels of anxiety when they attempt exam-type assessments, so how do we use AI to move away from exams? There's also diminished capacity: we have students who are caregivers, which means that rather than having five days a week for study they have a diminished number of days compared to those without caring responsibilities, so how do we help bridge those gaps? Those are some of the disadvantages we need to think about when we are thinking about using AI within our learning spaces. So where do you go from here? Think about an intersection you want to prioritise; we're not just talking about one disadvantage risk, we're talking about multiple disadvantage risks and how you can use AI to address them all. Consider the challenges we've been talking about and the limitations of the use of AI; that's also very important. And we need to develop an instructor-student partnership approach. Prepare yourself: engage with your institution's diversity and inclusion plan; participate in reciprocal mentoring to understand issues around race and diversity; utilise research that has already been undertaken by different institutions; there are several toolkits in existence, so use them and see how you can marry them with your use of AI within the learning space. And of
course, get involved with the various staff networks, and one that I would recommend is the Anti-Racism and Learning Technology Special Interest Group of ALT. How can you get involved? You can join as an ordinary member; there are officer positions available at the moment, so you can show interest in joining as an officer and we'll keep you in mind going forward; or you can join us as a speaker, because we're always looking for speakers on these topics. It's a space for real change, and for making a real impact. So these are some of the things that, once we prioritise them, will let us start to achieve inclusive education. I'll stop sharing for now; I think that's the end of my presentation. How are we doing?

Yeah, that's great, thank you so much, Tunde. That was a bit of a tour de force, with really important points but also some really helpful practical examples, so many thanks for that. We'll go into questions, and the first one is from Maria Walker, who asks: can we trust AI not to pluralise perspectives from a stereotypical point of view? She asked it to write a case study about a fictional African company, and it had no depth; there was no real representation in the case study. So is there something we've got to be aware of, that we could use it in that way but there could be drawbacks at the same time?

Yes, absolutely right, because when you ask generative AI to answer some of those questions, you have to be careful; as we said, sometimes you can't separate fact from fiction. But what we hope will then happen with the student is that it puts that investigative mindset into play, and they then go and research some of the information that was turned out by the generative AI; that's the extra step you need to take. Now what AI can
do is bring up different concepts, but the student then needs to go and research a bit more around them. The idea really is to make sure we're not just teaching them a concept from one point of view; it's about pluralising the various perspectives, because there is no one perspective that is right or universal. It's getting the students to think about multiple perspectives.

Sorry, I can't hear you. Apologies, I muted because there's a bit of hiss on the line. I was just saying that that brings in all the critical thinking we've been talking about in the earlier session, so it's an opportunity to reinforce developing that AI literacy, isn't it? There's a question there from Chris Rao: can AI be anti-racist?

Can AI be anti-racist? I think that's where the design-of challenge comes into play. If we are more intentional about how we design AI; we talked about the output coming from those WEIRD lenses; and if we think about who is involved in the design of AI, how diverse is that team? When we start to think about how we design AI in the first place, and who's involved in designing it, then we can start to use that anti-racist lens in developing it, and I think that's really crucial. Unfortunately, and I might be putting my foot in it now, when I think about some of these talks around ethics, which I know is very important, inclusion and anti-racism tend to be subsumed under it. For me there has to be a separation there, because anti-racism is an important component of achieving inclusion, and I think a lot of people don't get that. It's important for us to look at it from that perspective and ask: where are those inputs coming from? How diverse are those inputs?
Remember I mentioned, for example, ChatGPT, and I'm not singling out ChatGPT, it's just the only example that I could find: 92% of the dataset is coming from English, 1.8% is from French, 1.4% is from German, the next language is 0.7%, and you can then see that all the other languages are less than 1%. So if that's the case, then it means that whatever the AI generates will basically be from that perspective. So where is that diversity of knowledge? I think that's very important. So to address that question: yes, AI can be anti-racist, but it has to be intentional. The question is, I don't know whether it is at the moment.

Yeah. We've got a couple of minutes left, Tunde. I want to ask you a final question if I can, but I'll go with this one first, and it's essentially: how do we reconcile the issues of AI and representation with its use? It can feel like we are being complicit or supporting that. Would you say the key is to point out its shortcomings rather than boycotting it?

Yes, I think so, and that's why at the moment we have to use AI very carefully. It's important to understand the challenges that come with AI, or the challenges that AI itself represents, and we've talked about that already, but it's not enough to then say, all right, we dismiss the use of AI completely. There are several surveys that have been conducted, and it's been said that AI will become one of the competitive factors for many institutions in the next 30 years. We can say, okay, there are too many problems, we close our eyes and say no, we're not getting involved, but what will tend to happen is that we get left behind, and I think that's very unfair on students, because AI represents a very, very important opportunity for us to level up. And if we're not using that, and thinking about how we can use that despite all the limitations it's got,
I think that would be a significant missed opportunity, and I don't think we as institutions should be doing that.

A final reflection: I noted that during COVID some of those inequalities were addressed with the pivot online and the blended learning approach. We saw attainment gaps narrow, we saw BAME groups in particular do a lot better, and also disabled students. I remember during that time we did quite a detailed EDI equality impact assessment, and I'm just wondering whether that is something you'd recommend we should be doing with our communities, maybe with that approach that Mary outlined, with different professional services staff, academics and students together, to really work through that, so that we can begin to take that responsible approach to making sure that we adopt AI in an inclusive way?

Yeah, absolutely, and I think the COVID period was a very good case study for this, because when COVID hit, what we did basically was become a bit more flexible, so rather than having in-person exams, we started having different formats for our exams. The question is, we changed the formats for exams, and I know we're coming back to them because of AI as well, thinking, okay, how do we prevent students from cheating. But what COVID forced us to think about was how do we do things differently, how do we accommodate many of these challenges that our students are facing. What we then find is that post-COVID some people want to go back to the way things had been done, especially when generative AI hit the market, and many of the responses were, oh no, we have to go back to exams. But you just have to look at the historical performance of students in exams and you will see that there is a clear gap, based on ethnicity, based on race, and we can't go back to that. So it's very important for us as institutions, as colleagues, to come together and
think: all right, how can we use AI, just as we did with regards to COVID? How can we use it in a very effective way to address some of the challenges that our students are known to face? So it's important to know what those disadvantages are and how we can use AI to remedy them or level them off.

Thank you again, that's been a really insightful talk. As people are saying in the chat, they're really appreciating hearing your perspective and the practical examples you've given, so thank you very much. And that is the end of the morning session; we reconvene at 13:50. Maybe I'll just wrap up by saying: do think about the ALT ethical framework. We're really looking for case studies, and I'm sure that the Anti-Racism in Learning Technology SIG is similarly looking for case studies, and there is an ALT award around ethical practice, so think about this as you're developing your approaches in your institutions, and the kind of case studies that you could be sharing with the broader community. Because we're all in this together, we're all facing the same challenges, we've all got that challenge of time and space, and we need our collective thinking, don't we, and our collective perspectives. So I'll leave that as a final thought, but thank you again Tunde, and also to Helen and Mary, our other speakers this morning, and I look forward to seeing you later on. Thank you. Thank you, thank you for having me.