So good morning, everybody. Thank you for joining us for this UBC student panel. I'm just going to give you some housekeeping information to get us started. Please stay muted during the session. Use the chat if you want to ask any questions or share your thoughts. We're also going to be using a Padlet, and I'm just going to put the link in the chat; thanks, Lucas, for doing that as well. So you can either ask your questions in the chat or use the Padlet. This session is also being recorded, so we're going to have a video available at the end for those who cannot attend today. I want to acknowledge that I'm joining you today from the UBC campus located on the ancestral and traditional territory of the Musqueam people. This place has been a place of learning for centuries, and our panel discussion today is informed by the talking circles that were traditionally used, and continue to be used, in Indigenous sacred ceremonies. They provide a structure where the voice of each person present is given equal weight; each voice is acknowledged and heard in turn, and as moderator I'll make sure that our panelists have equal opportunity to contribute to the discussion. So welcome. My name is Manuel Diaz. I am an educational consultant at the Centre for Teaching, Learning and Technology, and I will be the moderator of the panel today. Today we delve into an enriching exploration of how students are using GenAI in their learning. Our panelists will discuss what motivates them to integrate GenAI into their learning journey, the varied aspects driving this adoption, and the benefits that emerge. They will address the potential issues arising from its use, such as academic integrity, privacy, and concerns regarding equity and access. The panel will also examine the critical aspect of evaluating the quality and reliability of GenAI outputs in the context of learning. You're welcome to post your questions in the chat. I'm glad to have my colleague Lucas Wright here today; he will moderate the chat throughout the session and answer your questions as well. To discuss this topic with us today, I'm pleased to welcome Jacqueline Fong, graduate student in library and information studies and Digital Tattoo student coordinator, Faculty of Arts; William Khan, combined major in computer science and math; Manzura Khatun, graduate student from the English department, Faculty of Arts; Gabriel Emmanuel Setiadi, undergraduate student in behavioural neuroscience, Faculty of Science; and Elena Wright, graduate student from the MET program, Faculty of Education. But without further ado, I'm starting with the first question. According to a recent KPMG survey, 60% of students are using GenAI in their studies. I'm interested in understanding the different ways you are incorporating it into your learning. So as a student, how do you use GenAI in your learning, and what motivates you to use it? Can you describe specific tasks? Maybe we can have an answer from each of you. Who would like to go first? I can do that. Yeah, thank you. Hi, everyone, I'm Jacqueline. So one way that I usually use GenAI is to proofread my own work after completion. The assignments in the courses that I'm taking are usually essays or reflective papers, so they are quite long, and if I finish them late at night it's quite time-consuming for me to proofread the whole thing. So I usually use GenAI, to be specific, usually ChatGPT, to proofread my work.
And using GenAI can actually save me a lot of time, and it helps me to write in a more concise and clear manner. Thank you. Thank you, Jacqueline. Who wants to take this one on? I'm happy to go next. One of the ways that I really like using GenAI is, beyond ChatGPT, other platforms that are GenAI but not necessarily producing written outputs. Things like Research Rabbit are really helpful. I'm a graduate student, I do a lot of research, I'm looking at bibliographies all day; that's my bread and butter. So it's really helpful to have those kinds of tools to help find other readings or research and to compile bibliographies. And I also really like that a lot of these GenAI tools have integrations that connect with one another, so I can connect Research Rabbit to Zotero, which is a well-used bibliography and citation manager, and that makes it really easy for me to link all of my research together. So I really like using it for that purpose. Thank you, Elena. Who wants to take this on next? Yeah, I can go next. So, similar to Jacqueline, I use it mostly for proofreading, and sometimes for brainstorming at the beginning, especially when I have to write a paper. I'll just give it the prompts: I have this idea and this idea, and ways to connect them, and then it will give me a beginning to start with. Outside ChatGPT, I also use Midjourney, because at times I have to bring in specific images that can express the work I'm doing. In that case, I give a prompt with very specific details of my work, and it will generate certain types of images for that. Thank you, Manzura. So, William or Gabriel? Yeah, I'll go next. So, to compare it to, say, an internet search, what makes it different or perhaps better? I would say first is personalization. You can ask questions that maybe have not been asked before and always get an answer. That could be a double-edged sword, because it could make up an answer. And I think it's also great for coming up with examples: if you don't understand something, hey, create some examples to help me understand this concept. And if you still don't understand it, you can always follow up, so it's kind of like an interactive dialogue. But as a student, in the first eight months after it came out, I was actually on co-op, so I wasn't studying. I saw the turmoil it was causing, and I was getting excited to come back to school and start using ChatGPT. But I have not used it nearly as much as I thought I would throughout the term, mainly because there's a clear separation between the learning side of school and the application side of school. In terms of learning and understanding, stuff that I would do before, that's perfectly fine. But I think the mere existence of ChatGPT kind of motivates laziness, because it's like having the answers at the back of the textbook. And that's why I want to stay away from just looking at the answer, or doing things that would circumvent the effort needed to learn. So anything that's understanding-related, yep, that's great. But when it comes to doing assignments or applying the knowledge, I stay away from it. And William, just in case, which version of GPT are you using, if you don't mind me asking? I use GPT-4, but I've also tried Perplexity and local AI models. Thank you. Gabriel? Yeah, I'm glad to share my experience. So most of my experience with AI is with ChatGPT and Humata AI. These are the tools that I was actually encouraged to use by my professor.
And so the way I use it is as another student, another friend, or even a tutor, to get a different perspective, since my major really revolves around reading and summarizing research papers. So most of my motivation is to get that other perspective, as from another student who might understand the research paper in a different way. That's how I see AI, like a study buddy, but I also use it, just like what William said, to improve my understanding and to paraphrase certain paragraphs or questions to simplify the language as well, because some research papers can be too technical in their language. And so that's how I use AI in my life. Okay. And just one follow-up, Gabriel, because you mentioned this is something that your instructor encouraged you to do. Is that something that your other classmates have been doing heavily, or did you have a sense that some students were not necessarily responding to that sort of encouragement? Yes. So I think once the instructor encouraged it, it was basically used by the whole class. And what I'm really grateful for as well with my professor, Dr. Rebecca Todd, is that she actually showed us and encouraged us to use it in our discussions of research papers during the class, so we could see where the AI can go wrong. That was really helpful to see. It was also nice to have an instructor who could guide us in how to use AI and tell us what some of the pitfalls around it are. And so, yeah, it became a really useful tool for being more efficient when it comes to reading a research paper that's 30 pages long and things like that. Yeah. Thank you, Gabriel. And I can see a few questions on the Padlet. I'll be addressing some of them a little bit later, because some of them touch on hallucinations, so we'll make sure that we give an answer to that one. All right, thank you, everybody. Does anybody want to add anything else to this question? We've got a lot of questions today, but you're welcome to add anything else if you want to. So my second question is about academic integrity. We know it's been a space of contention at universities, including UBC, and I'm interested in the extent to which you consider academic integrity in your use of GenAI. And for this, I'm interested in William's and Gabriel's responses. So who would like to take this one first? Okay, yeah, I'll go first. So I know CTLT or someone provided annotated AI syllabus language that instructors can put into their syllabus, and that's a great starting point. But I'll just say that you should probably be more specific; if we've learned anything from prompting, it's to be specific and give examples, and that makes it easier to follow. For example, if in your assignments you allow the use of it but you need it cited, then how exactly do you want it to be cited? Because that's not a question that we really know how to answer. Do you just want us to say that we used it, or do you want us to provide, say, the actual chat conversation? So try to be specific in how you want it to be cited. That just makes it easier to follow these academic integrity rules, because it is important: if students don't understand the concepts, they really should not be passing the course; that's not good for anyone, or for the university. And I would also say, again, that if you make the rules easy and reasonable to follow, then it's easier for students to comply.
If you say don't use ChatGPT for the entire course, that doesn't really make sense, because it's kind of like saying do not use the internet in my course. That's hard to follow, and you'll probably break that rule at some point, and then it's, oh no, I already broke your rules. So try to be specific and break it down: do not use it on assignments, do not use it on the finals. That's very reasonable, and if you give clear guidelines it's easier for students to follow as well. So maybe a follow-up question on that. How would you like this sort of conversation to happen? Because we always advise instructors to put a lot of instructions and guidelines in the syllabus, and I can't always assume that students read the syllabus carefully. Is that something you would like to have as a conversation early on in the class with the prof? Yeah, I would say definitely have a chat at the beginning of the year. Especially if you believe that GenAI is detrimental to students' learning, that doesn't mean you should just ban it outright for the complete course. For certain things, definitely ban it, because students need to demonstrate they actually know the material. But rather, if you think it's detrimental, say at the beginning of the year, hey, you can use ChatGPT if you cite it, but if you are over-reliant on it you're going to fail the course when it matters the most. So that's something you can just talk about at the beginning of the year, and I think that discourages people from, I guess, using it for malicious means. Yeah. Thank you, William. Gabriel? Yeah, so just to add on to what William already said. For me, part of my experience with AI is that, since I'm a co-op student, I have had an opportunity to test some AI tools that claim to be able to help with doing quizzes. The general finding was that, because AI is made by humans, and humans, even with great minds, make mistakes, AI also still makes mistakes. Throughout the testing, although it is very impressive to see how AI is able to answer most of the questions, when the questions become more sophisticated and more specific, that's when it starts to break down and doesn't remain as useful. So, just to add to some of the guidelines that William has already laid out for us, another thing that I think would be really useful to promote academic integrity is actually for instructors to go over how the AI works. So, just like in my PSYC 365 course, the same course I was talking about during my intro, our professor actually went through and showed us that the AI is really dependent on the input that we give it. She purposely put in a research paper that is biased, that skewed its results and misinterpreted them just to get into the journal. She purposely put a terrible research paper there, and when we asked questions of the AI, it actually gave us answers that supported those wrong interpretations. So I think showing students, in the guidelines, how it's not perfect and that it should be used more as a tool rather than as an all-knowing being that can help you get good grades, and seeing how it can go wrong, will actually encourage students to still think critically about the output that the AI gives them. So yeah, I'll end there. Thank you, Gabriel.
And there's a question that I think is perfect for you, maybe related to the previous question: somebody was asking on the Padlet whether ChatGPT can provide references, as you do lots of research, looking for different articles out there and doing some analysis and review. So what can you say about that question, whether ChatGPT can provide references? Yeah, I think with ChatGPT, when it comes to references, it will definitely look things up through most of the search engines to tell you, oh, where did this paper come from and where did this idea come from. It's able to do that, but that's still kind of limited to the version of ChatGPT that you use. And also, again, sometimes we have to be aware of even the references that it gives, because they can be based on the previous input that you put into the chat prompt. And so if there was a hallucination from the AI that you did not catch, because maybe your mastery level is not proficient enough to catch that hallucination or to fact-check the information, it can actually give you the wrong reference as well. So that's a caveat that I've found as well. Thank you. And before I let you go and move on to the next question, there's another question that is of interest: somebody is asking if you are aware of your instructors using AI detectors to check your assignments. Is that something you're aware of? What can you say about that? So unfortunately, in my experience I've only been in a class where the professor encourages us to use AI, and I was not informed, at least, whether the instructors were using AI detectors. So I think this would maybe be a better question for other panelists who might have answers there. Yeah, absolutely. William or somebody else? I would say, in my CS courses, you do assignments and there's some coding, so if a lot of people submit work that looks really similar, a certain writing style, then that raises some flags. And of course you can always put the prompt in, or try to answer the assignment using ChatGPT, and then cross-check that. So that's, I guess, most of it, but in terms of these AI detectors, not really; I guess someone from English would have to speak to that. Just a question: when you mentioned cross-check, with what? You put the prompt in, you try to act as the student, you see what ChatGPT would give, and then compare that to your students' submissions. And if you have a lot of student submissions that look the same, then yeah. And also, if you encourage students to cite their sources, then maybe you'll get one student who has declared, I used ChatGPT here, and you can see students who didn't. Yeah. Yeah, thank you, William. Does anybody else have something to add, in your context? Yeah, I think that actually in the humanities at the moment, professors are frequently using AI detection. There is a section of Turnitin that has AI detection. How well it works, that's a different question, but professors are looking at it. I'm pretty sure that it is integrated into Turnitin at the moment. And I think students can even see their own mark on AI detection; I think if you submit your work and you look at your Canvas, you can actually see what it flags, because obviously instructors want you to know if you're plagiarizing or not. So yeah, I definitely think that it is a practice, but obviously instructors have to take it with a grain of salt, because how reliable these tools are is definitely a question we have to ask ourselves.
Right, and I'm trying to follow the chat, but right now at UBC we're not encouraging the use of any GenAI detectors, for various reasons, ethical reasons in particular; people like me with an accent may just end up the victims of those different tools. So there is a position regarding the actual use of detectors, and it's on the academic integrity website: there's an FAQ section that speaks specifically to the use of detectors. So my next question is really about privacy considerations. What are your privacy concerns when it comes to generative AI? How do you deal with this as a student, in terms of biases and ethical implications? How should the university community deal with this? Jacqueline, would you like to take this one on? Sure. So, for privacy concerns regarding the use of GenAI, I basically have two. The first one is about sharing of personal information. When accessing all of these GenAI tools, users basically have to share personal information in exchange for the service. Take ChatGPT as an example: when I was doing the registration, I used my Google account and also my mobile phone number, so technically I gave in two pieces of my personal information. And personally, I don't feel so comfortable sharing both my email and my phone number with the tool, but I couldn't do anything about it because, I mean, it was the only choice. So the step that I took to better protect myself was to use an email account that I don't normally use for my personal and professional communication, so that my ChatGPT account is not linked to the email account that I'm always using. And there's another incident: I had a conversation with some of my classmates in class about the use of ChatGPT, and I realized some of them hadn't tried ChatGPT at all up to that moment, which was quite surprising to me. When I asked, oh, why didn't you try it, their response was that some of them are actually hesitant to try GenAI tools because they are uncomfortable sharing their personal phone number with the tool. So I guess this leads to the discussion of equity in the use of these tools. I think for this concern, what the university community could do is make sure that if ChatGPT or other GenAI tools are used, or encouraged to be used, in some courses, like in Gabriel's case, then the instructors should ensure all students have equal access to these tools without being asked or forced to give up their personal information for the service. So that is one thing that I think the university community can do. And my second privacy concern is about how my data would be used, and I saw some questions on the Padlet addressing this as well. Technically, when I read ChatGPT's terms of use, the word content is defined as the input of the user and the output of the tool, and ChatGPT clearly states that they may use content to provide, maintain, develop and improve their services. There is another line saying that users can opt out if they don't want ChatGPT to use their content to train their models, but it also means that in some cases the capability of ChatGPT may be limited. So I guess this has some ethical implications, and as a graduate student I respect copyright and also academic integrity, like what we've discussed earlier.
So I personally wouldn't input lecture notes or any research papers into the tool, because I do not have permission from the author, the creator, or the instructor to share their work with ChatGPT. And also, I think there was a question about this: since I'm mainly using ChatGPT for proofreading my work, I actually don't feel comfortable having my whole paper input into ChatGPT, because that is exposing myself too much to the tool. So basically what I do is, if I'm working on a specific topic and there are names or any identifiers, I just leave them blank, put name in brackets, or just put xxx, so as not to share too much information with ChatGPT. And also, even though ChatGPT has a term saying that if you delete things they will be deleted within 30 days, I'm not very confident in that, or I don't trust it that much. So I guess these are my privacy concerns, and what the university community can do more of is, like William mentioned earlier, provide clear guidelines on when it is appropriate to use GenAI tools. So it's not just, oh, you can use it on this assignment, but things like, can you share the course notes with ChatGPT, or how much can you use it. Another thing is having more guides, or maybe workshops, to offer students and allow them to understand more about the privacy concerns and the core implications of using these tools. Thank you, Jacqueline, for this very comprehensive answer; I can say the Digital Tattoo program trained you well. Manzura? Yeah, so I'll continue on this one. It's similar to what Jacqueline was touching upon, but for the humanities, for me personally, besides the personal data that Jacqueline explained very well, there is a problem with the sensitivity of the information, especially if you're dealing with research that has participants. For example, recently I'm dealing with anthropological research where we interview people, and it's very strict: we have more than five or six pages of data policy on how we're not allowed to use any content from the interviews with the participants in an AI model, even just written notes, and even if we just want to generate a response or a hypothesis or anything, because it's a very sensitive matter. Now, for some programs, or let's say some topics, like literature or something where the material is very much established and available, it might be easier to use ChatGPT just for brainstorming or prompts or that type of thing. But when you are dealing with human-based or personal data, it becomes very problematic in nature, and we are prohibited from using it. The other problem is the bias that comes with the answers the AI comes up with, and it only increases when you're dealing with political questions or religious questions that could generate responses that might not be correct, and you might not have the capacity, especially as a student, to evaluate them.
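To make the redaction step Jacqueline describes a little more concrete, here is a minimal sketch; it is purely illustrative, not a tool any panelist mentioned, and the placeholder tokens, the name list, and the patterns for emails and phone numbers are all assumptions for demonstration:

```python
import re

# Illustrative redaction helper: replace known names and obvious identifiers
# with neutral placeholders before sending text to an external GenAI tool.
# The name list and the patterns below are assumptions, not a vetted approach.
def redact(text, known_names):
    redacted = text
    for name in known_names:
        # Replace each known name with a generic [name] placeholder.
        redacted = re.sub(re.escape(name), "[name]", redacted, flags=re.IGNORECASE)
    # Mask email addresses and phone-number-like strings with xxx.
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "xxx", redacted)
    redacted = re.sub(r"\+?\d[\d\s().-]{7,}\d", "xxx", redacted)
    return redacted

if __name__ == "__main__":
    draft = "Interview with Alice Chen (alice@example.com, 604-555-0199) about the project."
    print(redact(draft, ["Alice Chen"]))
    # -> "Interview with [name] (xxx, xxx) about the project."
```

The point of the sketch is simply that identifiers are swapped for neutral placeholders locally, before anything is pasted into an external GenAI tool.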
If it is a very specific question regarding a political or religious stance, that might cause a problem. It's kind of similar to the hallucination problem, but in this case I might not detect it, and I might build whole research on data or assumptions that are wrong or biased. So in both cases, these are the main problems with generative AI. How should the university deal with this? For the privacy part at least, I know each program has its own specific way of dealing with it, and sometimes within a program each course or instructor has their own way. For my research, I know that the same department has more than five or six profs doing their own research, and they all have different models for what items are allowed or not allowed. Another approach I'm hearing about, and I'm not sure if it has been applied at UBC, is that some universities are trying to build their own AI software inside the university, which would limit the problem of data leaking outside, but I'm not sure if UBC has done something at this point regarding that or not. Thank you, Manzura. Does anybody else want to respond to that question or add anything to this? I'll add on to that, just because I saw a question on the Padlet about providing equity for AI tools since they haven't passed a PIA, and how we can give equal access to these AI tools. And really, I guess the answer would be what Manzura mentioned, with these universities that have their own AI software that everyone has access to, and that way you also keep your data protected inside the university. I guess that would be the way moving forward, or you have some sort of partnership so everyone has access and the data remains in Canada. That's a big wish, William. Gabriel or Elena? Yeah, if I can address some of the data concerns, especially going off what Jacqueline has said, I would also agree with Jacqueline not to trust most AI tools. I'm not speaking about ChatGPT, but some of the AI tools that we looked at were specifically meant to help students with Canvas quizzes. And most of those tools, when we reviewed their terms and conditions, did say, yes, we will use your information responsibly, but we are also in partnership with this corporation and that corporation, and your information can be shared with them as needed for marketing purposes or whatnot. So I think there's a lack of awareness among students about how their information is used, because most of the time, if a student is already looking for these types of tools, they're not thinking anymore about their information, they're thinking about their grades. So I think putting that into the guidelines and warning students would not only help them protect their information but would also discourage them from academic misconduct as well. So that's what I'll add here. Thank you, Gabriel. Elena, anything? No, I think everyone covered this topic quite well. Thank you so much. So my next question is about the value and reliability of the outputs that GenAI provides. We've got a few questions on the Padlet already, so I'll make sure that we address those questions for this particular topic. With the rise of AI that can complete assignments on any subject, any topic, what do you believe is the real value of your education? Elena? Awesome, I love this question. Obviously I'm now on my second master's degree; I mentioned that in the chat.
So I love education, I'm here for it, it's a passion of mine, but I find value in my education because I put forth the work of learning, and because I'm actually interested in learning. And I think if you're here at school to get a credential, to get a degree, and not really to learn, sometimes there are people like that and that's okay, AI is just a tool that they can leverage to kind of push through to the end. And how do we get beyond that? I think it comes down to assessments. Will education continue to provide value if we do not change our assessments to align with the changes and development of AI? Probably not. There's not enough value in education if we don't adapt and change. And so I think what we're going to be seeing over the next few years is a change in assessments to encourage creativity, to encourage critical thinking, to see how students actually apply their work in versatile and different ways, rather than maybe regurgitating something. And so whether the value changes, I think, is dependent on how we adapt, how we move forward, and also how we continue to inspire students and give them a reason to want to be here as well, right? To be in the educational space. So just a quick follow-up question, because you mentioned we need to adapt, we need to change. On the other hand, what would you like to preserve? That's a really good question. A really good example is, perhaps instead of asking students to regurgitate summaries, for instance, write a summary about this, ask them to apply the information that they have learned with maybe a critical concept and then their own experience. ChatGPT cannot make up your own experience. You cannot look at a person's paper and think, oh yes, they've experienced this hardship, and so on and so forth. ChatGPT, at this point in time, cannot make that up. Or perhaps it's a creative output that involves some kind of unique assembly of different tools, right? That actually demands that students apply these kinds of interdisciplinary ideas. I think the method of assessment will change between each field and each industry. For writing and, you know, digital humanities, the field that I'm particularly interested in, I think it's going to come down to knowledge of the topic. A graduate student needs to be well researched; they need to be well educated in the topic that they're talking about. ChatGPT does not know as much as I do about Victorian literature, and that is evident, and that is evident to the instructors who are grading the work. So yeah, it's dependent on the field, but adaptation is really the underlying motivator here. I hope that answered the question. It does, thank you very much. Jacqueline? Yeah, so adding on to Elena's point, I'm actually also pursuing my second master's degree here, so I'm super into education, I love learning. I want to extend a little bit on the critical thinking part. So yes, AI tools can probably complete a lot of tasks, such as brainstorming or summarizing, and they might be able to do them better than humans. However, as a graduate student, to me the purpose of education, of going to school, is not just to complete academic tasks. So I do still see the value of education, and I think the real value of education is to acquire the skills to think creatively and also critically.
And one thing, just like Elena mentioned, is that those tools have limitations. For example, at least for now, ChatGPT and other AI tools kind of lack the ability to verify the credibility and reliability of information. So the critical thinking skills that I acquire throughout my education journey can help me understand the benefits and also the challenges of using these tools, and my education actually helps me make informed decisions when I'm using them. So this is how I see the value of my education and its relationship with using AI tools. Thank you, Jacqueline. So we have a question on the Padlet that is somewhat related to this, and I'll let anybody take this one on. So, the idea of a professor grading your work using AI: what's your gut reaction to this idea, and what are your hopes and concerns? Who would like to answer this one? I can get the ball rolling. My gut reaction, I mean, I don't think anybody really likes it when some kind of external source is assessing their work, regardless of whether or not you did cheat using ChatGPT. I think it's still a little uncomfortable knowing that there is an external tool that is reviewing your work, assessing it, and then providing some kind of data metrics on it that might impact your grade or your outcome. So my gut: it's always a little icky. It was a little icky with Turnitin to begin with, and that's been around for years now. Beyond that, more critically, I think that at the moment these AI detection tools are not strong. From the information I have gathered in my own research, these tools don't function the way that we hoped they would, and there is no real concrete way for instructors to be able to assess if a student is using AI. I think it's on the basis of understanding a student's learning process, their tone, their knowledge, how they've performed before: have they written pieces that are similar? Have they demonstrated that they've been developing the skills that they were demonstrating quite well in this particular assignment that might have been flagged for AI? So I think you have to take it with a grain of salt, and I know that somebody else has possibly mentioned that these tools are inherently biased against second-language speakers because of the syntax that they particularly look at when trying to do AI detection. So I don't know, it's one of those things where it might be causing more confusion than good. And you always have to take it with a grain of salt, and any time we have to take anything with a grain of salt, there's just so much more thinking and critical thinking involved. And sometimes for instructors, that is a lot to always have to be cognizant of. Thank you, Elena. And I was also interested more in the use of AI to grade, not necessarily to detect, because if you look at the academic integrity website at UBC, we discourage instructors from using any of those detection tools, Turnitin's in particular. So let's say the prof doesn't have the ability to verify whether or not the output has been generated by some AI; but the idea of you being graded by AI, how would you feel about this? I'll give my opinion on this one. So, I mean, if you're just putting it in to grade it and you're not even looking at the assignments, then that's bad, don't do that. But let's say you're actually using it to help you.
So you're actually looking over what it's graded. In my courses, at least, I don't even get feedback anyway; it's just a rubric, correct, correct, wrong, wrong, and then you just check the answer key. So there's no feedback anyway, but if you have an AI tool and with that you can start giving better feedback, like, hey, this is where I saw you went wrong, this is where you could have done better, then great, because that wasn't there before. So if it allows you to actually give better feedback instead of just grading, then I'm all for it, as long as you're still actually checking the output. Thanks, William. And why don't you currently have any feedback? Is that the way it is designed, the rubric, or is it just... Yeah, at least in math and science, you just get it right or you get it wrong. On Canvas, you get the grade, and then for some of the courses you're supposed to ask a TA, just look at the answer key, or, hey, go to office hours, because they can't post the solutions. Gabriel, maybe? Or Manzura? I was just going to follow up on William's point, because it's interesting that, on the contrary, in the humanities we are encouraged to give feedback as a way of going beyond the idea of just normal rubric-based marking, because, you know, the humanities are mostly about where your answer went wrong, or what you did well or didn't do well. So I can't speak for everyone in the department, but I know that at least a good number of our profs, including the prof I'm studying with and teaching with, have this policy of no AI grading at all, at least in our department. And part of this is also that every single assignment we grade as TAs has to be handed back with very specific, personalized feedback, even just one sentence that shows that you as the instructor have interacted with the answer. So this is one way of breaking the idea that maybe it was graded by AI. And this is very much contrary to maybe other disciplines that have the opposite, where the feedback is coming from the AI, which is not the case for us. Thank you very much. Anything else from anybody? We sometimes have provocative questions on the Padlet, so I'm making sure that I bring those questions in to kind of push you a little bit. No other comments on this one? All right, so now let's talk about the benefits and challenges. To you, what are some of the benefits and challenges of using GenAI in your learning? You've mentioned a few already. How do you evaluate the quality and reliability of GenAI outputs in your learning? And I'm referring to this saying that GenAI is always 100% confident but only 70% accurate. Keeping this in mind, what is GenAI to you? Is it a tutor, a coach, a trainer? Gabriel already mentioned it's kind of a study buddy, but I'm interested in this question as well. So we have three components to that question, and I will let maybe William, Jacqueline, Manzura and Gabriel take that on. I can go first. I always think of it, not just ChatGPT but GenAI generally, as more of a classmate than an instructor or tutor, because when you discuss your ideas with a classmate, they might give you some new ways to deal with the questions or problems, but you don't really trust your classmate the way you would trust, or believe, the information given by the instructor. So take it as a new way to approach things and then continue based on that.
So for me, the benefit I see with GenAI is that it really helps to shape your thoughts when you need to say them out loud and come up with ideas; you don't always have access to someone who will listen to your rant and come up with ideas. So I just keep writing prompt after prompt until, okay, this is what I was looking for, this sounds much better, I will start working from this point. But the challenge is that AI cannot really deal with anything other than very superficial information. If I'm looking for something very specific in my field, it will often not understand and will give me some very shallow answers, and then I'll have to say, I might as well do it by myself, it's not going anywhere. A very simple example: if I ask something about the postcolonial, it might give me some very ordinary answers, but when I look for something specific, for example trauma in the postcolonial, it will not be able to answer that, because again its access to information is very limited, and of course it's just generating. So this is both a benefit and a challenge in a way. And to evaluate the quality, one of the things I always do is go back to the resources it's mentioning. It hallucinates a lot; it doesn't know how to say, I don't know. It will just keep generating answers, keep quoting from people who do not exist at all. So even if I take an idea that I find interesting, I always go to the UBC Library and other sources to make sure that this thing actually exists. And oftentimes it misrepresents or misinterprets famous quotes and puts them in a very different way, which is again very problematic. And that's why I think there is a bias in how it might quote someone whose stance is very much the opposite, to prove a point in a very different manner. Now, this is also part of the reason why AI grading is not integrated in parts of the humanities, because we might evaluate how well you're arguing your point, whether right or wrong. We don't really have right and wrong, but more, is your argument making sense to me, whether I agree with it or not. AI of course cannot do that; AI will just go and fact-check against what it has available and say, no, this is wrong. So in summary, what I would say about AI is: it's the classmate. Think with it, but don't trust it completely. So if I may, Manzura, we're talking today about GenAI that is a few months old, which is quite scary when you think about it; it's just so recent, and yet we talk about it so much. If I ask you the same question five or ten years from now, would you have the same answer regarding the reliability of those tools, and if they were to improve dramatically, would you have the same answer? I think what will improve with it are also our critical thinking abilities. Part of what we are questioning now in the SDS and humanities studies, including all of scientific knowledge, is the long question of trust, truth and post-truth and these types of ideas, but we are evolving with the new technologies. So in the next five or ten years, of course the AI will change completely, and my use of it will probably change, but also the way I verify facts and take in facts will change. So I think the notion of what is right and wrong, or more like what is trustable or not, will change at that time too. It's really evolving very fast.
And maybe right now we're seeing it as a crisis because AI is evolving way faster than we are, but I have noticed that there is also a change in teaching methods. We are moving past the idea of teaching as delivering content toward teaching as critical thinking, in a way. So we'll probably see that change too. So yeah, my position will change, but also my way of thinking about and dealing with it will change too. Thank you, Manzura. William, Jacqueline, Gabriel, who wants to take this one? I can go first. So yeah, I think Manzura made a really good point about AI, and for me, GenAI is kind of more like a friend, or maybe an assistant, for quick answers to simple questions, but not well-researched ones, because of what we have been talking about, how hallucinations or misinformation might be generated by AI. I think the benefit of using GenAI in my learning is that I work more efficiently, since I'm using it mainly for proofreading and brainstorming, and now I can do those in a really timely manner and then just build on some inspiration that I got from the tool. And how I evaluate the quality and reliability of GenAI outputs is that I always check the output before I use it as part of my work, especially when I see numbers or specific examples. One incident that I can share is that I once asked ChatGPT how many public libraries there are in Canada, and it gave me a specific number, saying that according to a 2019 report there are this many public libraries in Canada. And then, as I always do, I asked a follow-up question: what's the name of the report, can you tell me? And the reply from ChatGPT was interesting. It said, I don't have the link to the exact report, but you can go find the statistics on a website. So at that moment I knew, okay, that number might not be accurate, and I knew that I needed to do my own research; I cannot rely on that. So that is one way that I evaluate the output: I always ask follow-up questions to make sure that it's true, or at least close to the truth. And also, as a library school student, I really see how these tools might provide not very accurate information to students, since I've been staffing the library chat reference service. We have received queries from students asking, oh, can you help me look for a resource that is such-and-such, and when we tried to help with those requests, we found that the resource doesn't exist. And when we followed up with the students, they said, oh, it was actually suggested by ChatGPT. So basically, in these cases we can see that ChatGPT actually made something up. It's like what Manzura said: ChatGPT just tries to answer all of your questions rather than saying it does not know something. So I think it's really important that as students, when we are using it, we pay attention to the quality and also the reliability of the outputs. Thank you, Jackie. William or Gabriel, from your perspective as a student using GenAI? Yeah, so for me, GenAI is a tutor that's personalized and available 24/7.
So, for the benefits and challenges of using GenAI in learning, at least for learning, if I had to name the number one predictor of how well a student will succeed in your course, or how well they will learn the material, it just comes down to effort. So these are the benefits and challenges of GenAI: if using GenAI in a way leads to higher effort, maybe because of the time efficiency or the personalization, students are putting more effort in, then I think that's beneficial. But if you're using GenAI in a way to not learn, to just take the answers, not think critically, putting in less effort, then that's a detriment. In terms of evaluating quality and reliability, this is where you actually need to know what you're doing. It also goes back to the previous question about the value of our education. I personally believe our education is more important than ever, because you can fall into two camps: either you know what GenAI is saying and you can actually evaluate the output, or you don't, and then you just have to take it at face value. You put in a prompt, GenAI gives you an answer, and you can't think critically enough to evaluate whether it's a good output, and that's when we're in trouble. So I believe education is more important than ever, just because you need to be able to evaluate these outputs; and if you can't, then that's not something you should do, but if you don't know any better, then that's pretty much the only choice you have. So the best way to evaluate it is just to look at the output it gives. If it's giving examples, for example, you can actually cross-check them. If it's giving code, you can run the code, and if you know what you're doing, you can tell immediately if the code is good. If it's doing math, say, multiply these three-by-three matrices for me, you can look at the steps it gives, like, hey, give me the steps you took to produce this output, and then you can cross-check those steps. And oftentimes it's wrong, but I kind of like that, because then it's less tempting to use GenAI. It's just so tempting right now, because we're students, there are so many courses, so little time; if I use GenAI right here, maybe I'll save some time, I could just get the answer. But then I think that's detrimental to learning, because of the lower effort, but it's just so tempting. When GenAI gets better, that temptation will be even harder to resist, I feel. Yeah. So, William, I don't know if this answers the question on the Padlet, but you shared the strategies that you use to cross-check the information and the examples that GenAI has produced. Can you estimate how much time it costs you to verify this? Because you say the value in these tools would be if students spend more time on their learning using them, then it's beneficial. But in your experience, how much more time have you spent doing this sort of cross-check of the output generated by AI? Yeah, I would consider myself someone who actually puts the effort into learning the material, so I can pretty quickly tell if it's right or wrong. But that's also based on the way I'm using it. If I was using it more as an immediate source, where I didn't put the effort in, didn't try to answer the question myself, didn't really put minutes into the question and just put it straight into ChatGPT, then I don't think I would be able to evaluate it quickly. But since I really put in the time and the effort, it's pretty quick, a couple of seconds.
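To illustrate the kind of cross-check William describes, here is a minimal sketch; the matrices and the claimed answer are invented for the example and are not from anything discussed on the panel:

```python
import numpy as np

# Hypothetical example: verify a 3x3 matrix product that a chatbot claimed to compute.
A = np.array([[1, 2, 0],
              [0, 1, 3],
              [4, 0, 1]])
B = np.array([[2, 1, 1],
              [0, 2, 0],
              [1, 0, 3]])

# Pretend this is the answer the chatbot gave (last entry deliberately wrong).
claimed = np.array([[2, 5, 1],
                    [3, 2, 9],
                    [9, 4, 8]])

actual = A @ B  # independent computation to check against
print(actual)
print("Chatbot answer matches:", np.array_equal(actual, claimed))
# -> Chatbot answer matches: False
```

The idea is just that a small, independent redo of the calculation catches the wrong entry in seconds, which is the spirit of the cross-checking described above.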
Yeah, it's not too much effort. Okay, and so let's say you need to do further exploration; would you be using a different AI, a completely different AI that may function a little bit differently? Would that be something that you would consider doing? No, I would think for myself first, because again, I want to think critically and be better than the AI in certain things. I don't want to be a person who is not good enough, or whom GenAI can surpass; that's my fear. So I need to think critically and put in the effort myself before I check with GenAI, and that way I can actually learn something. And maybe a different question, just out of curiosity. Let's say you talk to your classmates and they say, oh, you know, I've used this different system and I think I get much better responses. I assume you would be exploring that different system or tool based on what your classmates say. I mean, as a CS student, I'm always interested in seeing the latest and greatest technologies. I have GPT-4, I play around with the GPT-4 API, so I think I do have the best of the models. So if they did say, hey, I tried this and this works great, like, hey, I tried it to generate some study questions and it worked well for them, I'd try it for myself. But in terms of a stronger AI, I already have the strongest, so I can't answer that. All right, thanks. Gabriel, are you able to provide some insight on that question? Yes, of course. Can you guys hear me well? Okay, thank you. Sorry, I just had some technical difficulties a while ago, but on top of the amazing answers that the other panelists have already given, I can speak from my own experience and say that GenAI is only beneficial if you know how it works, and it really depends on your level of mastery of the subject matter. So what does this mean? Just to go deeper into the example that I provided earlier in our introduction, when our professor provided a really terrible research paper: for those who are more familiar with evaluating scientific research papers, p-values greater than 0.05 mean that the findings were not significant. However, when using Humata AI, what ended up happening was that Humata AI just summarized the talking points for the future. There's a discussion section in the research paper that discusses moving forward with these findings, what they say, what they mean for our field, and so that was clearly a misinterpretation: any good professor and any good student would know that this research paper should not be trusted. However, just like one of our panelists brought up, most AI doesn't say what it doesn't know; it always tries to answer your question. And so it did lead to wrong answers. And so some students who had a lower level of mastery in the course, or in terms of reading research papers, which unfortunately included some of my friends, actually believed the AI 100% instead of evaluating its accuracy, and ended up getting their grades deducted because they put in the wrong answer. But what was also really useful in this class was that, before we dove deeper into research papers, our professor actually taught us how to read research papers, what to look out for, how to read the discussion, and how to critically evaluate even what the authors were saying.
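As a brief illustration of the significance threshold Gabriel refers to, here is a tiny sketch; the p-values are invented and are not from the paper discussed:

```python
# Illustrative only: the p-values below are invented, not from any real study.
ALPHA = 0.05  # conventional significance threshold

for p in (0.003, 0.04, 0.20):
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"p = {p:.3f} -> {verdict} at alpha = {ALPHA}")
```

A reader who knows this rule can spot when a paper's discussion (or an AI summary of it) overstates findings whose p-values fall above the threshold.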
And in doing so, because our professor also encouraged the use of Humata AI, we were taught how to critically evaluate its responses and adjust from there, using our knowledge of how to interpret research papers and our base knowledge in cognitive neuroscience, so we were able to use it as a helpful tool rather than something that might be harmful for us. So if I were to make an analogy, AI is like a kitchen knife. If you give it to a chef, they will know how to use it; they can cook you an amazing dish. But if you give it to a three-year-old, well, they might be able to cook up something, but they can also hurt themselves using it. So yeah, that's where I will leave my answer for this question. Thank you, Gabriel. I have a few more questions on the Padlet, so I'll make sure we allocate some time to discuss those. And as Lucas mentioned, we love the kitchen knife analogy; I would think more of a Swiss Army knife, but that's not exactly the same reference you're making here. There was actually a question about the study buddy that you referred to a little bit earlier: what differences do students think there are in using AI compared to asking, let's say, an in-person resource that might be available on campus, or even a TA? So now that you have GenAI available to help you out as a study buddy, do you think there's a change in your behaviour, in the way you may be asking for help? Can you speak a little bit to that? Yeah, I think I can speak to that. I think for students who don't know how to use it, again, who don't have that sufficient level of mastery, it will definitely take away the need or want to go to the instructor or the TA to ask, at least from my own experience observing the friends around me. However, those who do know how to use it, and who are taught how to use it, will actually go to the TA even more, to verify what the output is. And so I think, if you want to still encourage students, and this is something I would also like to see happen in the UBC community, it's really about making students aware of what AI can do and what it cannot do. In doing so, it would encourage them to come to the TA and say, hey, I used AI, I got this answer, is this correct? Or to go to the professor: did the AI misinterpret something? Is this a hallucination? Because I think one thing that we can shine a light on for students is that we may not know enough to evaluate the output that AI gives us. And in doing so, that will foster even better interaction between professors and students, I would say, and might even encourage students to come to office hours or to TAs even more often. So yeah, I hope that answers the question. Sure, thank you. So that's a good point that you're making, that it may foster the interaction between the instructor and the students. I would say that's in the context of a prof allowing the use of GenAI; but if, let's say, an instructor is not allowing you to use GenAI, and you still have a question based on your use of GenAI, I would be curious to see the reaction of the prof saying, well, you're not supposed to. Because what William and you were also saying is that, as long as it's beneficial to the learning experience, you invest more time in looking for the right response, and that's what it's supposed to be.
So I would be curious to see that relationship, depending on whether or not you're allowed to use it, but that's just a reflection of my own. And I've got one more question, if I'm not mistaken, and that might be the last one, about originality in learning. As GenAI becomes more sophisticated, and it will become a lot more sophisticated, does original work still exist for students? Or do you think we are witnessing the end of authenticity in academia? Elena and Manzura, what do you have to say about this? So I like that original work is in quotes, because what really is original work? AI uses information and words off of the internet that people have already said. People have put this information into the ether already; it has come from somewhere. AI is not making anything up. It's not creating new ideas, doing new research or composing new theories. It's just taking information that already exists and compiling it in a way that makes sense to us. So does original work still exist? Of course it does, because we have to continue to do more research, we have to continue to have more creative outputs, we have to find ways to connect our personal experiences to the information that we're producing, because the author is almost as important as the work that they produce; you can't separate one from the other. We want to share our personal experiences, we want to add the lives that we live in this world to the work that we're actually producing, and that's what makes it original. It's our experiences, it's the way that we view the world, the perspective, the tone, the way that our experiences have shaped how we think about particular topics, political ideas, that AI might view as objective, quote unquote. So yes, originality does still exist and will continue to exist. But like I said, academia needs to keep up: we have to be asking students to be original. If you are asking students to regurgitate the same thing that a gazillion other students have also said, is it really that original? Are we asking students to think critically if we're asking them to do something a million other people have also done? Yes, we need foundational skills, we need foundational knowledge, but there's also an extent to that as well. So that's my two cents. I think it's a bit twofold: originality will continue to exist, but we need to ask people to be original, we need to ask them to think critically and to put forth that effort. Thank you, Elena, I can feel the passion in your response. Manzura? Actually, this question has led to profs in my English department having different approaches to AI in their classes. For some profs, the originality counts in the way you're producing the idea; it's not just how you're writing it, but the expression of the idea itself that is original work. But for other profs, there is the idea that there is no problem using AI to brainstorm, to get ideas and then extend on them, because there is no such thing as a purely original idea: whether you're using AI or doing your own research, you're actually taking inspiration from others' works, building up on others' works. So an AI, and I'm describing an ideal AI that could actually give you very real information, should be able to just keep summarizing, shortening the time it would have taken you to go through books and research works and build on that.
So this is the standpoint: for some people there is no such thing as an original idea to begin with, and every work is an extension of a previous one. AI is just one of the steps you take as you continue building on that. For other people, where you originate your idea from is itself the question of origin, and using AI takes away that originality even if you are adding to it. I can't really say where I stand as a student, because it really depends on where you're going with it, but all I can say for sure is that, for most people, using AI to produce the whole work is not original work. It really depends on what you're doing with it. Whether to use AI at the very beginning of the work, that is the question here, and I think each person has their own view on that.

Thank you, Manzura. So we're getting almost to the end, and I just want to make sure we address the questions we have in the Padlet. We've addressed a few already; there are a few more. I've been following the chat, and thank you to Lucas Wright as well, to Chai Min and Anza, and I've seen responses from Steven too, so thank you, Steven. Do you have any final thoughts you'd like to share, based on the conversations we've had together today, regarding the use of GenAI? I'll let anybody answer here.

I'll respond. I'll just make a small point: GenAI is very new, and it seems hard to think about questions like, how does GenAI impact this? How does it change that? I feel like if we just remove the GenAI part from the statement, we can draw on our existing knowledge; we have a lot of knowledge built up over years of teaching. Say the question is, why would students use GenAI to cheat in my course? Well, remove GenAI and ask, why would students cheat in your course? Then you can start applying all the knowledge you already have. I think that simplifies the process, and if you can draw that comparison and then look at how GenAI is different or similar, you can start getting a good foothold on how to think further, evaluate GenAI, and see how it changes things.

Thank you, William. And maybe just building on what you answered: what do you think profs should do to create an experience where students will learn and develop skills with the use of GenAI? Because this is not going away; we only have a few months of experience with these tools, but five or ten years from now they will be quite different. So what kind of experience would you like to see as a student in your future courses, so that students get used to using GenAI and develop new skills for researching, for reflecting on content, for being more critical? What do you think?

Yeah, I would say, in general, try to implement things that lead to higher effort. Make the course more interesting, because again, why would students put more effort into your course? If your course content is interesting, or it's easy to get into, then students are more likely to put higher effort in and engage more. And let's say you're designing an assignment. If all your questions are ridiculously difficult, then, well, you don't leave many options.
However, if you start the assignment with some easy questions, some fundamental ones that build a strong foundation, that's where students can use GenAI to check their learning on those throwaway questions. Then, when you get to the actually interesting questions, they have the foundation; they don't need to use GenAI at that point, and they don't need to just cheat and get the answer. So if you make your assignments and your course content digestible and you build up, you can kind of encourage and discourage the use of GenAI at the same time.

Thank you. Yep, Elena. Oh, I can add onto that. We talked a lot about critical thinking skills throughout this panel, and how important they are for using generative AI, but you really need to gain critical thinking skills by actually working through problems and assessing them, and not copping out and just getting answers from alternative sources, generative AI or not. I think what instructors really have to do, now more than ever, is convince students that the skills they are gaining in their class will serve them intrinsically as a person in this world, rather than as someone who's just trying to get a degree, because we do need critical thinking skills to move through this world: to have conversations with our loved ones, with our partners, to understand political topics, to understand the ways in which the structures that shape how we live actually implicate our decisions and our lives. We need critical thinking skills and knowledge for life, life broadly. And instructors, now more than ever, unfortunately, really have to market that to students and give them a reason why. Why do you need these skills? Why do you need to think about them outside the walls of this classroom? Why is this knowledge valuable for you to know as you move through this life? Because four years of undergrad plus post-grad, that's maybe what, seven years? Seven years is a fraction of the time you will spend on this earth, and you will need these skills. We're kind of losing that passion, or not necessarily passion, but the reasoning and rationale, because now we have to be much more critical of the why. Why do we need this? Students now really have to propel themselves, to ask themselves to be critical. So maybe even implementing more real-world implications, or asking students to engage with these ideas on a more intimate level, so that students feel connected and motivated to continue their studies in a way that doesn't revolve around cheating or copping out because it's too difficult, because they actually want to learn, they actually want to be here.

Thank you, Elena. That's a good point. I think it's interesting because we have only a few months of experience, I guess, and we are more in a mode of reaction rather than anticipating what we should be doing. And I guess time will tell, in terms of the skills that will be needed in a particular domain or discipline once someone graduates. So it'll be interesting to see what comes out of this in the next five or ten years, or seven years, as you mentioned. We have a question from Christina in the chat that I'd like to ask you: if an instructor suspects that students used GenAI in a way that is not allowed in their course, how do you think they should best address this with the students?
So that's kind of back to what Gabriel was saying about fostering interaction and collaboration with the prof, asking questions, but in a case where the prof is not necessarily allowing such a use of GenAI. Who would like to respond to that question from Christina?

Yeah, in one of my courses they suspected people had used GenAI, and they put up a slide in class that said, okay, this person cited it, this person did not, we suspect them of using GenAI; next time you get zero, but this time it's okay. And that basically just scared everyone. So I'm assuming you allow citation, but if you just ban ChatGPT or generative AI outright, then you kind of put it on yourself to start policing things. If it's citation, that's an option you can try, but if it's just banning, then yeah, you're making it harder on yourself. Let's say, hypothetically, you have a course where the syllabus has a final exam worth 100% of the grade. Then everyone quickly realizes that cheating on anything else really does not matter, because you're going to fail the course anyway. If that's the dynamic, you could tell your student, hey, you cheated on this assignment, but that's probably not going to lead you to success in the midterm and the final, where things are locked down. But that depends on your course. So yeah.

Thank you, William. I've got two last questions that I think we should discuss. One is: how does GenAI perpetuate the coloniality of being, knowing and doing, and what should we do to decolonize GenAI? That's not an easy question, but I'm just wondering if any one of you would like to answer it.

I can answer this. I do like this question, because AI perpetuates the idea that we are receptacles of knowledge: knowledge comes to us and we absorb it, it can come to us from any direction and we're just inhaling it. But really, knowledge is shared, and when we think about how we've come to learn, we've learned from other people, from their experiences, from chatting, from learning about their backgrounds and who they are. One of the really important aspects of Indigenous ways of knowing is having conversations, right? Listening to other people, engaging with other people's experiences without thinking about documentation or essays or that kind of thing. We're talking about being in the world. And so what I think is really interesting is that GenAI doesn't necessarily include that facet of learning. As for how we decolonize GenAI, that I think is a much broader question that I cannot answer at this time, but I would be interested to hear from anyone else. Oh, someone has put an article in the chat. Wonderful, I will take a look at that, but if somebody else has another comment, that would be appreciated.

Thank you. Does anyone want to answer this question in particular? It's not about how to decolonize, but I would just add, on the first part, that one of the biggest problems with AI is the source of the knowledge, or who is controlling the knowledge: who owns the AI company, where is this knowledge coming from, and what is the presumed assumption behind what we are calling fact? These are some of the main problems with AI, ChatGPT, and anything generative, because where are you generating the data from?
And of course it's replicating what we call the power structure of colonization, just in a digitalized form; there is also digital colonization now. Obviously it's a broad field, and it's on the rise right now with the evolution of AI, so it's a very broad area to look at. There isn't really an answer I can give at this moment, but science and technology studies has been doing good work on this, because, as I also mentioned, there's someone producing knowledge and someone being fed the knowledge, and who the producer is matters a lot.

Thank you. And there is one last question, sorry: do you feel that, in general, you learn more when using AI? I will end on that question, because we have the Philosopher Corner at 11 and we need to make sure everybody has time to rest and get ready for the next session, but if someone wants to take that one on, that would be great. Thanks.

Yeah, maybe I'll say that the benefits of AI and the fear of it are both exaggerated. It's not really changing things the way people anticipate, in either the good or the bad direction. Am I benefiting from it? Yes, but maybe 25%, or at best 30%, not more than that. It's not really as good as it's advertised, and it's not as bad as it's feared or warned about either. It speeds up the process of searching, but it also creates new methods of looking and verifying. So it's just another mode of learning.

Thank you, Manzura. Maybe Gabriel, William or Jacqueline on this one, and then we will finish.

I can share. In general, I think when I'm playing with all these GenAI tools, I'm learning more things. Of course, I have not been relying on them fully, because I do not fully trust the information they give, but I think I'm applying new skills, for example writing prompts and figuring out how to make good use of these tools to facilitate the work I'm doing. So in that scope of learning, I would say I am learning more, quite new skills, when I'm using these tools.

Thank you, Jacqueline. One last word from Gabriel or William?

Yeah, for me, I just try to use it for learning or understanding-based questions, and not for questions that deal with applying the knowledge, because at that point I should already understand the material and be able to do it myself. That's how I try to maximize my learning and make sure I actually understand the material, because the midterm and the final are not where I want to find out that I don't actually understand what I'm doing.

Gabriel, maybe a last reference to the kitchen knife? Yeah, of course. I think my final thought is, again, to repeat the point that AI is only as good as your understanding of how the AI works and of the subject matter. So most of the time I agree with Manzura: students will opt to use it for efficiency, because, to say a bit more about the culture at UBC, everyone always has two or three things on top of courses, whether it's volunteering, clubs, and everything else. So I think AI will always be a beneficial tool for students in terms of efficiency. But with that said, not everyone understands how AI works, what it can do, and, more importantly, what it cannot do.
So I think it is much more important for us not to be scared of AI, but to teach ourselves first how to use it, so that we can then teach students how to use it better, because at this point it's quite hard to police students into not using AI. I'll just leave it at that.

Thank you, and thank you everybody for your outstanding contributions today. I really want to thank all the panelists for the time you spent with us today. I'm sorry for finishing a little bit late, but there's a next session at 11, the Philosopher Corner, so I will leave it to Lucas. Again, thank you so much for your participation. Bye.