Welcome, everyone. I felt that I was frozen for a little bit, and I tried to keep my big smile on while I was sort of frozen. Thank you. This is really exciting, that we have this faculty round table. I had a chance to meet with them, but first let me acknowledge our land again. Lucas mentioned that Indigenous knowledge has been scraped without acknowledgement; this is a colonial practice. I would like to remind everyone of the Indigenous Protocol and Artificial Intelligence position paper that came out of the Indigenous community gatherings in Hawaiʻi in 2019. They put out a position paper where they talk about when we should use generative AI: when we are using it, we need to think about the impact on our land, on our nature. It's not just about using AI to help us, meaning human beings; human beings exist because we have the nature around us. So think about the environmental impact that generative AI has around us, and also think about how we can use generative AI to make our world a better place. Even as I'm saying it, I'm a little scared. But as I mentioned earlier, we are in the education space, and it is our job as educators to think about what education will look like in 15 years, maybe in 2 years. What is the classroom going to look like? And I'm just very excited that we have gathered together an amazing group of panelists. I'm going to ask my colleague to help me spotlight them, and I will turn off the slides so you can see who they are. Say hello to them: we have Celeste, Halima, Carly, William, and our colleague from computer science. I will be the moderator, but I told them that they do not need to follow my structure. We are a little group, as you may notice, and this is the structure here, which you may have noticed in the event registration.
In the description, we set out four round-table questions. First, assessment: what is it going to look like? How is generative AI going to transform our assessment, and how can we balance fairness and innovation? Then we will move on to integration into teaching practice: how is it going to affect our teaching, and how can we provide personalized feedback? At the same time, we don't want to forget about creativity: how can we continue to foster students' creativity and independent thinking? Our last question is a little big; when we met last week we said, well, this is a huge question, but we know it's going to come up. We hope that the structured part will finish in about 30 minutes, and then we really want to open the floor for lots of questions and discussion. We welcome you to chime in and ask your questions. We set up a Slido for when you want to submit questions for our panelists, but feel free to use the chat to chat with each other. The chat may get really busy, and I'm a little worried that I may not be able to follow it, but we will save the chat at the end and have a chance to look at it later. If you really want your questions to reach our panelists, please put them in the Slido. Just before we start: we really want you to know that we are not experts in large language models, but the panelists know a little bit about what is happening in their disciplines; as Alissa mentioned earlier, we know what's happening in the field. So, say, Celeste doesn't know much about AI, but she knows about teaching, just like many of us do. Science writing has converged on acknowledging AI use, but we still cannot list AI as an author, and she may tell us a little bit more about that, and about how in her classroom students can use AI to develop ideas and writing. Next is a faculty member from computer science.
There is just so much happening in his field, because he studies the social and human sides of the interaction between humans and AI; again, he's going to let us know more during his time. Halima is a faculty member in the Master of Educational Technology program. She works on learning analytics, on how machine learning algorithms can help us model students, and on applications in the field; and it's not really new. (And something just popped up on my screen.) She is going to tell us more about what is happening in education. Carly is a sessional lecturer in the Sauder School of Business, and she is really curious about the mindset of teachers in the education system; she will tell us what she has been thinking. And William has been using generative AI to assist in writing and in thinking about protein variants, how to cure disease-causing mutations, and all that. When I see proteins, I have to say I get really excited myself, because of the scientist in me; I also used to study proteins. So now that I've left the field of protein structure science, I would love to hear the latest from him. Again, use Slido for questions, and this is the code; we will keep the code in the corner of the slide throughout the session. I just want to let everyone know that the panelists met briefly last week, and during our discussions there were a few things we would like to share that are not in the event description. We really want to continue to talk about generative AI: the promises, some of the risks, the guardrails that we can set in our classrooms and our education environment. What does the arrival of generative AI mean for the philosophy of education? We want to talk about that, but we might also want to join the philosophers' corner tomorrow or later during this symposium.
There are some practical ideas that we would like to share too, about its use in content creation, in our classroom practice, and in teaching and assessment. We talked a lot last week, and I asked my panelists: please keep the same energy that we had last week, and now over to you. These are the four questions. We will start with the first one, and then I will turn off my slide share. First question. I would like to invite Celeste and Halima to respond to this: how does generative AI transform skills and knowledge assessment while balancing fairness and innovation? Celeste, Halima, over to you.

Hi, thanks everyone. I guess I'll go first for a few minutes. I'm in biology, and the short answer to this, I think for me, is that it hasn't changed much at all. I am here mostly, I think, because I was a super early adopter of AI in my classroom, as early as the first week of January, when second semester started. In biology in particular, my class is mostly out stomping around in streams and measuring stuff, and thinking about the earlier point about where that is going to be in 15 years: we will probably still be out stomping around and measuring stuff in the stream. Last term and this term I have done quite a bit of introductory use with my students, playing with AI; it was mostly just ChatGPT at the time, last December, almost a year ago. There was immediately a big rift, I think, at least in my field, between people who were really afraid of what AI might do and were implementing all sorts of screening mechanisms, detectors, that sort of thing, and a few people like me who were genuinely just curious about what this tool could do for education and in our classrooms. And I was a little bit unsure of how to navigate that. Can you show the next slide, Judy?
One of the things that I used very early on, that was a sort of permission for me, was Sarah Eaton's work on the tenets of postplagiarism. She does a lot of thinking about, and dealing with, what writing in particular is going to look like with the advent of AI. For me this gave me permission to launch ahead with my students and not really worry about all the detection things that some of my colleagues were implementing. So I just jumped in. At that point, believe it or not, only one of my 100 students had actually used ChatGPT before; most of them hadn't and had no idea what was happening. So we worked together for a while building some guidelines. The other slide I wanted to show is their comments at the end of the term. This was very unscientific commentary: I was invited to give my first talk on using AI in the classroom right as the term was ending, and I hadn't really considered getting feedback from students, so I literally just passed out paper towels to the people who were there and said, tell me what you thought of using this. These were some of the comments that stood out for me, and they have really impacted the way I think about my students using ChatGPT. Mostly they use it for their writing, although now some of them use it to generate ideas. To be honest, science writing is not usually super creative; the creative part is the generating of ideas, which it does an okay job with for my students, kind of. A couple that I wanted to point out as really accurate and true: number four, neurodivergent students realizing that this kind of levels the playing field in some ways, which I think was a good takeaway for me.
In this group of students, about 30% chose not to use AI at all in their writing, and I think that percentage still holds true this term, for two reasons: one, they say they want to learn to write on their own and this isn't very helpful; and two, they feel it's so much work babysitting it, and there's so much redundancy, that it's just not worth the time. So probably most of my students, even though they have free access to it, aren't actually using it much anyway. I'll pass it over to Halima. Thank you.

Thank you, Celeste. I agree with you, and I think we should lead with curiosity. In the field of education there was an uproar; there's a lot of fear, and I think that fear is warranted. These are new tools, untested tools. While there's a lot of promise, there are drawbacks, and you mentioned your neurodiverse students; that's one of them: making sure that these tools are accessible to our students, that we teach them how to use them, and that we're not making learning inaccessible (excuse me, sorry about that, I have an alarm going off) by shutting things down and focusing on cheating, on academic integrity. I think that as time goes on we're going to be using these tools quite often for cognitive offloading, but we have to teach our students to use them critically.

Yeah, I honestly feel, at least in my field, it's almost a disservice not to expose my students to what it can do, particularly for their writing, because it is becoming more and more of an acceptable thing in science publication. So I agree that we should approach it with curiosity, and also, to be honest, with a kind of responsibility. I find it interesting to talk with our students creatively about it, to decide: okay, how do we want to use this in this course? Do we want to use it as a coach,
you know, using it for brainstorming or pre-writes and things like that? Or is it okay to use it as a crutch for grammar or organization, or somewhere in between?

What was the input from your students after they had had the opportunity to use it in the course?

Yeah, some definitely used it a lot and still do, and some not so much; they're just not interested. There's a question in the chat about citing ChatGPT. Initially, at the start of term, there were no guidelines in science at all; this was last term. Around March, some of the big science journals, Nature and Science, came out with guidelines. People had submitted articles citing ChatGPT as an author, and the journals basically said you can't do that; they're not accepting it as authorship. You can note in the acknowledgements section that pieces of the work were written with AI, or that AI was used, or something like that, and that's what we implemented in my course. My students do publish, I should state that: all their work is public-facing in one way or another, so none of it is constrained just to our class. So it is an honest consideration of how they're going to acknowledge that they used ChatGPT, and we follow the guidelines of the big science journals: they can't list whatever AI they're using as an author, but they do acknowledge that they've used it in the work.

Thank you very much. I'm going to move on to the next question. Are you ready? This is the question that we put in our description: integrating into teaching. How may we integrate generative AI into our teaching practice, from creating cases to providing personalized feedback? Over to you.

Thank you. I am answering this question in the capacity of a researcher
studying how AI, especially generative AI, and particularly large language models of the ChatGPT kind, can be integrated into educational practices. I am maybe better known for studying the human-AI interaction aspect of the field, and I can comment on the speculative promises and also the risks when practitioners actually embrace AI in their teaching. I have also allowed my students in Computer Science 344, Introduction to Human-Computer Interaction, to use ChatGPT or any tools in their coursework, and I have gathered some survey data to get a sense of what's happening with them, so I can comment from my experience regarding this question as well. This question, if you still remember, actually has two components. One is how we're going to integrate these powerful new AI tools into teaching practices, and the second is about personalization. I think they are related, but I would like to break them into separate threads of conversation. Integration is a very important issue to think about. What I mean by integration is not just fighting against ChatGPT; as Celeste mentioned, it is a more integrative approach, an embracing attitude. But it is also not the mindless allowance of letting students use ChatGPT and seeing what happens. That latter approach, a really laissez-faire approach, is actually what's happening in many UBC classrooms, as I've heard: the instructor has no leverage to control how students are using it, so it is "allow it and see what happens." I see a bit of risk in doing that as well. So then what is integration? If you want to distinguish integration from the other end of the spectrum, you can think about your educational endeavor in two different ways. One is the AI-independent learning that should happen in students' learning,
and the other is the AI-integrated learning that should happen, which contrasts with AI-independent learning. I also call AI-independent learning "AI-invariant learning." To give you an example of these two concepts, think of the introduction of the calculator in the 1970s and 80s. The calculator was a very useful tool. Everybody knew it was going to be used in professional settings, so we had to teach students how to use it for arithmetic tasks. But it also raised concern and debate about how we were going to teach arithmetic. So when it comes to the calculator, there is calculator-invariant learning that should happen: students should learn how to do addition and basic arithmetic and know the concepts. But there is also, correspondingly, a calculator-integrated learning approach: you have to teach them how to use the calculator for the actual tasks that are relevant to their future professional jobs. I think a similar parallel exists in the AI domain. Why? Because AI use is already very prevalent in professional settings. Take code generation: about 80% of coders, of software engineers, are already using AI-based code generation tools like Copilot, and it has just become part of their everyday toolkit. Coders use an integrated development environment with code-generating plugins, so they can type a little bit of code and the tool gives them a whole function, or a whole file, that they can run and test. It is already becoming part of the workflow, and the same is true for writing tasks and many other tasks you can imagine. Going beyond language, there are image generation and decision-making as well. So we have to teach them how to use it.
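To make that code-generation workflow concrete, here is a hand-written sketch of the kind of completion such tools produce; the function and its body are illustrative examples, not actual Copilot output. The developer types only the signature and docstring, and the assistant proposes the rest:

```python
# A developer types only the signature and docstring...
def median(values):
    """Return the median of a non-empty list of numbers."""
    # ...and the assistant suggests a body like the one below,
    # which the developer can then run and test before accepting.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd count: the middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle pair

print(median([3, 1, 2]))     # 2
print(median([4, 1, 2, 3]))  # 2.5
```

The pedagogical question the panelist raises is exactly this: students still need the "AI-invariant" skill of verifying such a suggestion (does it handle odd and even lengths? an empty list?) before trusting it.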
So I'd like to highlight the importance of distinguishing between these two concepts, AI-invariant learning and AI-integrated learning. And it's not just about those two concepts: educators should envision whether what they are going to teach will stay relevant, or how it might change, in a new world where AI use is normalized. Because, well, we live in a capitalistic society, and these AI tools essentially work as productivity boosters; that's why there is so much hype about them. I say this while recognizing all the side effects, concerns, and risks of that capitalistic endeavor and of corporate greed, which are super concerning too. But there is a trend, and I think the trend will be to embrace these tools further in professional jobs. Maybe I spent too much time on integration. On personalization, I'll say this: if generative AI is going to make a big impact on education, it will be because of personalization. This thought is based not on a recent trend but on an old study, Bloom's 2 Sigma study. Are people aware of the 2 Sigma problem? It's a 1984 study by Benjamin Bloom, maybe one of the most famous education researchers. He compared the performance of students in a traditional one-to-many classroom, where one instructor teaches 40 students or so, with student performance in one-on-one tutoring in a mastery learning context. What Bloom identified is that if you offer a student of similar quality, similar talent, individualized learning rather than one-to-many, their performance can increase by two sigma of the population, which means that on average a student in one-on-one learning performs better than about 98% of students in a big classroom. Right.
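The "98%" figure follows from the normal distribution: a score two standard deviations above the mean sits at roughly the 97.7th percentile, commonly rounded to 98%. A quick check with Python's standard library:

```python
from statistics import NormalDist

# Percentile rank of a student scoring 2 sigma above the class mean,
# assuming scores are approximately normally distributed.
percentile = NormalDist(mu=0, sigma=1).cdf(2.0)
print(f"{percentile:.1%}")  # 97.7%
```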
The same student who gets an A+ or an A in one-on-one learning can be average in traditional learning, and these LLM-based learning tools can actually serve as a tutor, an instructor, a teaching agent. Of course, the current technology is not perfect; it has many issues, like hallucination, factual errors, and also the over-dependency issue. Those need to be addressed, and I believe companies are going to work on addressing them. But if that happens, there is hope that it can reduce a lot of work for instructors and provide higher-quality, personalized education to students. That's the promise. Yeah. Thank you.

Thank you for sharing. His sentences are very long; I've been trying to stop him because we still have to listen to the next two panelists. Thank you. I have a lot to think about, because of what he said about what we are going to teach in the classroom once this becomes normalized. I don't teach until the summer, so I still have a few months to think about it. Let me share my screen so that you can see the next question coming up. We are going to look into creativity. I think this is the most exciting part for me; again, I haven't really thought about it much as a scientist. Up next we have William and Carly: how may we foster students' creativity and independent thinking when we have all these tools, when AI can do so much individualized learning for us and with us? Over to you, William and Carly.

Great. Would you like to go first on this one?

Sure, I can kick off. This is a fun question for me to respond to. I teach a course called Creativity; it's a required course in many of the graduate programs at Sauder. Just a quick amendment to what Judy said: I'm a full-time lecturer, not a sessional, and the group lead of the business communications team at Sauder, but I also do educational design work as a consultant outside of UBC.
My PhD is actually in design: how design and systems intersect with educational spaces, and how we can enact our creativity within those spaces. So when I think about creativity with students, I first flip the question to myself, and I pose it to colleagues: what is our relationship to our own creativity as educators? What do we currently deem creative in our classrooms? And how does the emergence of generative AI both assist and really challenge the grooves and habits of our own creative work? Questions around trust are really interesting to me when I think about creativity. Do I trust myself with it? Do we trust our colleagues with it, because faculty are using it quite a bit too? Do we trust our students with it? Do we trust students at all? Do we trust the people who created it? Do we trust our information in it? So much of how we get into places and spaces of flow and creative process, both with students and ourselves, is bound up in questions about trust. I often think that our relationship to our disciplines, and to the topics we are scholars in, is like our own personal love story. We have to really fall in love with our discipline, fall in love with our craft, fall in love with the topics that keep us up at night. And in December of 2022, for me, generative AI emerged and took a seat at the table. Even now I always have, sort of symbolically, an empty chair in my classroom, a physical presence of "there is something else in the room with us." It is not just a machine; it is people who have created this amazing, just amazing, thing, and it is there. It's ubiquitous whether we want it to be or not. Do we see that in our disciplines as a betrayal? Do we see it as a new chapter of our love story? Who are students in that story? What are they writing alongside us in the story of our disciplines?
How do we really start to push past the us-versus-them dialectic that we often get into with students, and instead think about "we together," a community of novice-to-expert scholars, with this empty chair in the room with us that we're all trying to decide whether or not it should be there? For me, that's where the creative process starts: in those types of conversations. And I'll just end by saying that a student from the creativity course I taught came back recently and said: I think the thing we did in the classroom that is most helpful with AI is that we came at conversations from a mindset of courage and optimism. To be creative in this time, with so much going on in the world, requires immense amounts of courage and optimism. The definition of creativity, according to the scholarship, is the production of something that is both novel and useful. So we are creatively and optimistically trying to find utility in this very novel thing, because it's there, and it's something that I need to work with. I do still think that as scholars in the academy we're debating its utility; we're debating, and getting quite personal and critical of each other about, whether we believe it's useful. But it is certainly novel.

Thanks very much, Carly. So, by way of introduction: I believe that there is a philosophical essence to things, and pedagogy, teaching and learning, is one of the things for which there is indeed a philosophical essence. Now, I'm a medical geneticist by training. I solve rare genetic diseases as best I can, and do my best to diagnose patients with the strangest, weirdest diseases out there, diseases that don't necessarily even exist in textbooks yet when I, and many of my colleagues, are called upon to diagnose them. In my capacity as an educator, I have taught the graduate course for medical genetics that includes an in-depth discussion of the human genome and its structure.
Of course, we have assignments as part of that course; students have had to produce written essays discussing a scientific topic. The challenge then is, of course, marking those essays and trying to form my own understanding of what the student's understanding is. The challenge I have with generative AI in that context is this: I think we can all imagine the optimal way AI might be used, and kudos to Halima for highlighting that it can be used well if used critically, where students are really thinking about the text they are generating, the ideas behind it, the science behind it, the actual causality behind what they are asserting in their discussion of a scientific topic. However, it is very often not used optimally. Everything I've read about generative AI, at least in this early generation, suggests that a significant proportion of users basically use it to scrape what's out there and produce something, arguably novel or arguably derivative, something very close to average and often below average, and submit that simply as a time-saver. So I think we do need to have these conversations about the nature of the assignments we give to students, and about what the nature of the assessment actually is. Is the point of me asking a student to write an essay to have the product, the essay, at the end? Not really. The point of that exercise is to produce a novel understanding in the student's mind, not only of how to write, but of how to think. I see writing for an audience as, in many ways, the highest form of thinking, because you really have to get your thoughts in order, decide what the message is that you want to get across, and make sure that it is actually intelligible to the audience. So that's my contribution, and I invite questions and comments.

Thank you very much.
Thank you to all the panelists; I'm going to spotlight you all. I know I assigned you a very short amount of time to speak, and I actually cut you off too. But now the floor is open. You can respond to each other; we also have questions on Slido, and I see Celeste has started responding in the chat.

Yeah, may I ask you to expand and elaborate on "what is out there"?

Oh, I was just responding to William's comment, because I think it's important that when students take text from ChatGPT, they hopefully, before they use what's written, think about where that data came from. We have to think about the language corpus and what's in it; a lot hasn't been written, and we have to look at those voices. Right now ChatGPT doesn't really tell us the provenance of the data, so we really need to consider that before we use the output it produces for us.

If I can build on that: the way I bring it into my classroom is as a tool in process, rather than an invitation for folks to use it for final output or final product. I mean, I'm not naive; they probably are using it in crafting final assignments, but those are also not as heavily weighted as the process pieces. That, I find, allows me to have it in the room with me, to be able to pose the challenging questions you're asking, like: what is this data, where is this data coming from? And to stay in that space of curiosity; you really set a lovely foundation in your introduction around how we stay in that place of curiosity. Because I'm very curious about that data set, but also curious about the mindset that this is a built, crafted, created thing; it's not some sort of neutral digital source somewhere.
So we bring it in as a process tool. For example, think-pair-share becomes think, pair with AI, go back to your pair, then share, and we ask key questions in that process. There is a criticality we can bring so that it can be in the room in a way that isn't always about the other "AI," the academic integrity of a submitted document. That's one place where I really lean on that data-source piece and what's behind it.

Wouldn't it be great if ChatGPT 5 told us where the data came from? That kind of acknowledgement would be amazing, and I think that's one piece of explainable AI that we haven't talked about: just crediting the people whose data is being used, and perhaps even involving them in the conversation of whether they want it there, whether they want it in the aggregate. You know, we are decolonizing so much of our curriculum; it would be interesting to see how we decolonize these large language models.

I'm just trying to catch up on the questions in the chat; we have a lot of questions. Celeste, I noticed that you unmuted yourself earlier and I let it go, but I also have a question for you: how do you check your students' lab notebooks? You mentioned that you check them.

Yeah, the lab notebooks. Lab notebooks are still paper, pen on paper, because of all sorts of considerations in a physical lab, safety considerations and otherwise. Before a student leaves the lab every week, they bring their notebook to the TAs, who scan through and make sure the students have completed everything they have to complete. The notebooks are fairly open-form; there aren't prescribed things they have to complete, but they have to have their methods and materials and their data collection, and then they write a reflection at the end of each lab.
Then they get a sticker in their lab notebook, and that signals to us that they were there and participated that week. So that answers that. And then three times a term we actually collect the notebooks for the week, and the TAs read through much more diligently and provide feedback. That's a special thing in science; I wouldn't expect it to be broadly applicable, but it's really the nuts and bolts of understanding what's happening, more than their writing is.

I unmuted myself because I wanted to address a comment in the chat which I thought was interesting, about AI enhancing our comprehension of the broader world beyond the immediate physical classroom environment. I don't really have much to say about it; I just wanted to throw it out there as a good question.

I don't know if you saw, but there are questions on Slido that I think you may have something to say about. It's related to calculators. In one of the Slido questions, the person said calculators have little to do with humanities learning and teaching, but ChatGPT is transformative in each and every aspect; are they comparable?

Of course they're not. Yeah, of course not. The calculator was a metaphor I used to help you imagine the relationship between the core components you want to teach in your course and what you have to facilitate so students can integrate AI into their work. It's not a calculator, of course, because it works in a much more probabilistic way, meaning there is high uncertainty in its operation. Also, in many ways these tools take the form of a human-like figure; for example, with ChatGPT, why are you chatting with this agent thing instead of just giving it input and getting output?
And also the advanced models, or the advanced apps, allow you to talk with voice, so you can hear its voice; it has a voice and a gender now. And the range of jobs it can do is much wider than a calculator's, right? So it's not a calculator. But my point is that, because of its powerful nature, it is going to be used in the field, and it is being used in the field now. Then what are you going to do about it? What do you make of your education? If you don't integrate AI into it, it might be less relevant than a new version that has AI in it. In that sense, you want to critically reflect on the value of your educational endeavor. And you're going to find the core competencies you must teach without using AI, and they need to be assessed without AI. For example, you can monitor how students are working in the classroom or in an exam session, or you can try this lab notebook approach that Celeste has introduced, which is a great idea, I think. And you have to distinguish that from what you teach with AI. I hope that answers the question. Thank you very much. And I see another question. So this is a question about the major barriers for faculty educators in attempting to integrate AI into our curricula. Anyone, unmute yourself. Yes, I think one of the major barriers is what freedom and flexibility we feel we have, depending on our own position within the institution. I say that as contract faculty. You know, I have a great contract and I'm full time, but it's still a contract faculty position, not a tenure-track position. So there are barriers there, institutionally, in how brave we can feel and how we can see our teaching and our classrooms as spaces where we have the ability to try things that may not result in a perfect process. I mean, a lot of what we're bound by as contract faculty, non-tenure-track faculty, is the metric of student evaluations and student-experience-of-instruction metrics.
And that's risky for many folks, to want to do things really differently or to make our classroom spaces filled with unknowns versus knowns. So I would say that fully tenured faculty are in the easiest position to take risks and embrace the tool, but are also in the easiest position to turn away and ensconce themselves in a position where they don't have to use it at all, because there's not a lot of consequence there. Whereas for lecturers, sessionals, grad students who are emerging into the academic market, and undergraduate students who are going into job markets where AI is everywhere (you know, entire communication teams are now being integrated with fully AI processes), we simply don't have that choice. So the biggest barrier, I think, is often institutional support. I will say that one of the reasons I felt so able to use the tool and to explore it is the incredible leadership of our current senior associate dean of students, Alicia Salzburg, who has been the one, since January 1, 2023, to really work with teachers and lecturers and students in this domain. And as an associate dean, she is also a lecturer, contract faculty. That to me is a really powerful person to have represented in the dean's office right now. I know one of you wanted to respond. I wanted to make a comment on several things going on in the chat. And that is, at this stage, at least in my experience using ChatGPT for generating ideas and science writing (I've used it in a couple of other areas as well, I've watched my kid use it some, and I've tried statistical analysis), it's actually not great. It's best at statistical analysis, but the rest is kind of rough. So I think we need to proceed with caution about this assumption that students are just going to be able to have it create this amazing thing, because it still takes a lot of babysitting.
But having said that, I think this is the thing where, 15 years from now, obviously it's going to get better, and eventually these AI models will hopefully be able to accurately cite sources, for example, and not write things that are so obviously written by AI. So, you know, there will be improvement going on, but as it stands right now it's not a standalone thing, in my experience, mostly. William, and then I know Don would like to say something too, but Don, I'm going to ask you to hold that. William, go ahead. I mean, I'll just mention my concerns about something that Cory Doctorow referred to as the enspittification of the internet, except he didn't use a P, he used an H in the middle of that word. So, yeah, I'm worried about the enspittification of AI over time as people tend to rely on it more and more. It partly addresses Don's point about how, when you have such a tool and it becomes so useful that everybody piles onto it, it becomes almost impossible to imagine using anything else. I would invite everyone here to consider how often they use Google as opposed to DuckDuckGo or some other competing search engine, right? Or how often they use Amazon or Walmart to order things, compared to ordering direct from retailers. I think there is a danger in everyone piling onto one particular large language model and saying this is going to be the one that we use. There will be other ways to monetize it, et cetera. Thank you. I know this is really a round table; we are getting spun into so many different directions, and there are so many topics we'd like to respond to. But I know you were going to respond to something on curricula.
I wanted to respond to Carrie's and Celeste's points about how AI tools are going to march up and actually climb the power hierarchy in the educational system, which is just my speculation and expectation, something I hope will not happen but that is likely to happen. And I think it's related to the colonial nature of these tools, where they leverage existing resources without proper consent; there's a lawsuit, I believe, between authors and OpenAI now. And they leverage it for corporate interest, and of course these big tech companies say they're using it to make the tools more accessible and usable and powerful, which will enhance the productivity of society as a whole. But there's also a concern about the marginalization of individuals, as Carrie beautifully illustrated. And one point I want to highlight is that typically this marginalization happens from the lower end of the power structure. Let's say in a university there are the decision makers, like tenured professors, then those in more vulnerable positions, and then there are TAs and undergraduate TAs and so on. And these tools, as Celeste has mentioned, are not perfect, but they can do a pretty decent job at those tasks that humans do. And there are existing studies showing that these tools can actually sometimes do better, giving superhuman performance on certain kinds of tasks. For example, if you let it do a very mechanical or laborious task that doesn't require a lot of deep thinking, the kind you ask your TAs to do, it may do a great job. And as William said, if the whole structure gets addicted to that capacity, then we may see the need to give less work to the TAs. And as the AI tools' capacity increases, what they can do will climb this ladder, and it may impact the other tiers of the workforce.
So there's a real threat to the actual hierarchy itself. And further on, I think the other relevant aspect is the threat to training. What if these tools eradicate the need for TAs? If these tools are so good that every instructor uses an AI TA, there's no need for TAs. That allows a budget cut for the university, but it also removes the future academics who got excited through their experiences of interacting with those they taught. If I remember why I became excited about teaching, it's based on my undergraduate teaching experience. But if I don't get that opportunity, then this removal of shadow learning is going to lead to a deeper problem in the work structure, I believe. Thank you. So much to think about. Hallie Ma, yes. I've been trying to narrow it down to just one concern, and I've narrowed it down to two. My two greatest concerns are transparency and interpretability, and I say this from big data science: learning analytics using machine learning algorithms to process educational data. Specifically, one of the things I've been looking at is how people understand visualizations of data. And I think we basically have an issue in that: do people understand what they're looking at? Do they understand, especially if they're using these types of tools for the interpretation of data? Can they critically question the results? This is a process where we absolutely have to have humans in the loop, meaning that... It looks like we lost Hallie Ma. We are going to hold that thought, and we have another: William is ready to respond to a question on Slido. William, can you? Thanks very much. Yes, I wanted to respond to the question: does generative AI highlight a conflation of knowledge and competence with communication skills and confidence, and how would that distinction be teased out? My short answer to that question is yes, I do think it highlights it. As for how to tease it out?
It's not really clear how we would tease that out without a lot of intensive work, because in order to actually assess a human being's understanding of a topic in detail, you pretty much have to interact with them one on one, verbally, for an extended period of time. If you want to make sure there is no AI intermediary coaching them on emails and coaching them on how to write, if you want to actually assess the understanding in someone's brain, you have to interact with them one on one. Even then, humans are good at gaming the system. All one needs to do is look at the recent history of various CEOs who have brought products of a highly speculative nature to the investment stage, and then the whole thing has collapsed like a house of cards. A number of companies have gone under for exactly that reason: the conflation of good, confident talkers with competence in what they were actually trying to do. So thank you for the opportunity to address that question. Hopefully the person who asked it considers my answer satisfactory. Well, to all the audience: you are welcome to unmute and ask. We'd love to have the interaction. Even though we have Slido (I have my phone out looking at the Slido questions, and there is all this chat going on), let's not forget the other way of communicating. Just unmute yourself and ask. Don't be afraid; we are not going to spotlight you if you don't want it, and you can remain off camera too. Are there any other questions you would like to elaborate on, since sometimes it's hard to type in the chat? Yeah, this is an interesting one. Maybe Celeste: you put in the chat that there was a comment, was it on Slido? "Surely the biggest questions facing instructors right now are how we evaluate student work. Everything else is interesting, but not important." And you wrote, "My evaluations are exactly the same." So can you just clarify? I'm just curious about what that is.
Yeah, what's going on there. So, the structure of my course has not changed at all, other than giving about 30 minutes a couple of times in the term to showing students how to use this, for students that haven't used AI before, if they choose to. Other than that, there have been zero changes in what my students do and how we evaluate their work and their assessment. So I know that's very specific to my field and what I'm doing. I had a comment earlier in response to something that I think Jennifer McCormack posted around the different platforms and the different outputs they produce, where I've really shifted in my communications courses. I've shifted to first-year Comm 96, which is the first-year writing and public speaking course for first-year students, and where I'm really going to be shifting in term two is thinking about writing in terms of the input into these spaces versus the output. Prompt crafting, and what a prompt is, creates a huge difference in what is produced as the output. But I also pose, and will pose again, the question about voice. As a writer, so much of what you're doing when you're writing (and I don't teach science; I teach professional communication and creative process and writing in that domain), so much of it is about writing as a process of finding voice, finding self, writing your way through something stuck that emerges as unstuck. In my own academic work, writing was a research methodology, so writing my way through my thinking was really important. And one of the pieces I'm curious for us to explore is how you find voice and identity in how you prompt, how you craft and draft a prompt. What do you need to know to be able to draft a great prompt, versus just cutting and pasting something you saw some tech guy post on X that's going to, like, make you $5 million?
What I'm curious about is: in 20 years, what is the tone of voice that people will be inputting into AI to emulate you? Right now I can say, okay, ChatGPT, please write my shopping list in the tone of Sylvia Plath, or in the voice of bell hooks, and it can do this amazing stuff with my shopping list that is truly quite beautiful. My students are the future bell hooks and Sylvia Plaths. So how do I, as a writing instructor, work with them to continue to find identity and voice in their writing, so that the room isn't filled with just a canon that ends in 2023? William and Celeste. I'd like to speak a bit to the issue of trust that was raised. I invite everyone to consider why you believe what you believe: the beliefs that you walk around with in your head, your value systems. Generally, I would argue, you believe them because they've come to you from some type of source that you trust, whether it's parents, cultural background, other members of the university, et cetera. When one reads a chunk of text, there is an implicit assertion of value that the text is making. Now, that may come from a human who wrote the text, or a group of humans who put the text together, like, say, clinical practice guidelines on a particular disease. If it's coming simply from sampling everything that's out there and basically producing an average, then I'm concerned about the drift towards average, and about the ability of people to game that system. So I'll invite folks to consider that; I don't have a specific answer for it. If I can comment on the issue of trust: I think trust, including in this assessment context in education, should come from those who bear the responsibility for what they're offering. It can be the writer who might have used ChatGPT, or it can be a student. So that's why AI literacy education is important.
It's not just about how to use AI, but about what comes with AI use, such as the fact that the user bears the ultimate responsibility. These are tools, though they are super tools, and trust should arise from the relationship between those involved, including instructor and student in the educational context. So that's the comment I want to make. Thank you. I saw that you unmuted yourself for a very brief moment. Oh, I don't know why; I forgot. I think I was already responding in the chat to comments, and it was one of those I've already commented on. Thank you very much. There is so much going on; I'm still looking at the chat and looking at my phone for the Slido questions. We still have one question at the top of the Slido list that has five upvotes: if AI is allowed, or even encouraged, to be used in students' learning, will it deprive students of the learning opportunities that enhance their mind development? I do have a question about what "mind development" means, but I'll leave that to you. I mean, I'll go on that one, but I'll modify it so as not to be so dogmatic. Again, if coached appropriately on how the tool works and what the actual output is, namely a sampling of the average that's out there, one could produce students who understand how to use generative AI to produce a better product. I don't know that you could use generative AI to produce better understanding, other than in some kind of meta sense, pardon the pun. But, you know, understanding is intrinsically difficult to assess. There have been a couple of other questions in the chat about that; it's very hard to do other than one on one. I mean, full disclosure, almost all of the evaluation that I do of students and learners, such as residents who are training to be specialists, is low throughput: one on one, very difficult to do at scale. I haven't solved that problem of how to do it at scale. I invite comments from the others.
If I can comment about that: I think we are talking about the very old constructivist concept of learning, where learning happens inside, not just as a behavioral measure. So yes, there's a risk that if students depend on using AI, they can lose that inner ability. And that's why I believe AI-invariant, or AI-independent, learning is important; it must be assessed without using AI. But also, if I can take a little more progressive or adventurous stance, I hope you can also think about the metaphor of people using computers these days, or search engines. Someone in 1980s education might say, oh, that's cheating. You're not using your own ability to go to the library and identify what's relevant; you're just typing in keywords and copying and pasting everything that comes up on the website, right? The point I'm making here is, of course, generative AI is not exactly the same as the internet. But if you want to assess somebody's capacity, then you should identify the context you want to evaluate their capacity in. Meaning: do you want to assess a single human without any tool, or do you want to evaluate somebody who is using the tool? And I think that's going to be the fundamental question of education. I think this also speaks to how we handle uncertainty. And I think that's something we should be asking our students to do: what's your certainty in your answer? What is that built on? And then that goes back to what William was saying about the criticality that we're employing: do we trust this data? Where does it come from? Do we trust this response? Do we have that unsettling feeling where we don't quite trust it? And this goes back to what Carrie was saying. Do we trust what's there? And if we don't, let's explore that. Let's help students build that tacit knowledge where they are building their expertise and developing trust in themselves.
And I think that's something that maybe these generative AI models have given us: the opportunity to add some things to the way that we assess student work. And I apologize for cutting off earlier; I found myself talking to a black screen, so I don't know at what point my computer died. So sorry about that. I think Skynet doesn't like what you're saying, Hallie Ma; that's the problem. It takes like two minutes, and I've had several people write privately and ask specifically about how I address this with students before they're off to the races in my class. So I just want to take a minute or two; it's very simple. What I do (I've done this both semesters, on the very first day) is I share my screen and I put in a prompt that's tangential to, but not super specific about, what we're doing in class. It's around salmon and its importance in the ecosystem, and I ask it to write two paragraphs or something. Then I just let the students sit and read it and comment on what they think it's done well and, even on day one, what they notice it's done badly. So they have a list right away; they know immediately that it is super repetitive, it's very superficial, all this stuff. And then usually someone notices that it doesn't provide references automatically, so we ask it to give us some references, and then they spend a few minutes and realize that half the references are made up, or don't say what it claims they say. So it's a pretty easy and pretty simple thing: within like 30 minutes they're set to go, at least for my class, and know the critical things to watch out for. Thank you for sharing this. Yes, can I just add something about the question about student learning that we're building on? One of the thoughts that I've had, drawing on the universal design for learning literature, is about how these tools can be both enhancing accessibility and equity in the learning process in our classrooms, while simultaneously reinforcing systemic bias and systemic oppression through the data set they're drawing on. So I try to tease out the two. The data it's training from is one particular piece that I interact with critically and reflect on critically as an educator; I try to connect to the critical pedagogies that I'm steeped in and really think about the AI's outputs, its privacy implications, and the ethics of it as a piece of work that I need to continually, critically engage with. But as a pedagogical tool, it also attends to many of these systemic biases and barriers in how it creates ease for many folks in the system who have not had ease. So it can create spaces for students who are neurodivergent or who have English as an additional language. There are ways that it has created in my classroom a very quick leveling of confidence, where we can use this tool to help with editing, help with revising, help with creating a really well-designed PowerPoint for a public speaking presentation. And these things are not taking away all of the creative and generative energy of my students, especially those who have found the real detail-orientedness of scholarly work to be a real barrier to their ability to launch. So I find myself torn, very torn, and I can't say that because of this I'm not going to use it, because some of those same issues apply here. And I've had interesting conversations with students; I had a student once demand that we not use it at all in class, because of all the ethical issues and biases in the data set.
And then I said, yeah, but I'm also thinking about this pedagogically, and there are ways that this has really created equity and reduced harm in the education system, for some of the students in this class who find there to be quite a few barriers, and for faculty too. So in many ways, just to name it, it's really helped me in terms of leveling certain pieces of the scholarly process that have been really challenging for me. I think it's helpful to consider ourselves all beta testers in a system that's not quite final. If we take that perspective, that you're a beta tester, you're going to find bugs, you're going to possibly break the system, then it's a much more secure place to come from. I have a comment about the points just made about this notion of beta testing. I wonder if you have heard of the notion of "permanently beta," which has become a norm these days, where every tool that you use online is in beta testing, and you are actually both a user and the product; you are the product if you're not paying for it. And that's the same for these tools as well. Why? Because they're using your data to train a better system, advancing it, making it more powerful to garner more users. So in a sense you're no longer just a user; you're the one being used. In education, let's talk about the UBC context. CTLT is a blessing to UBC; I believe we have so many competent researchers and officers working to enhance technical support for UBC classes. And I think I heard they're working on it, but one thing that I think is really important for embracing AI in UBC education, for the benefits Carrie mentioned, such as accessibility and better learning, and learning in the field, is a UBC version of an LLM, a large language model we can use consistently. Why? Because of the variability of model performance and also the paid options.
I don't know whether you have used GPT-3 compared to GPT-4; GPT-4 is a paid version where you pay 20 USD and you get better performance. Actually, the performance difference between GPT-3 and GPT-4, the paid version, is like a world of difference. If you let it do your assignment, it's like seeing an A to A-minus student versus maybe a C to B student. So, if we do want to level the playing field, then we need to let students use a consistent language model, like ChatGPT, or you could say ChatUBC, offered as part of the educational infrastructure. And that's going to take a lot of investment. The other reason we have to have our own language model is privacy concerns. If we just let students use, say, OpenAI, all the data is crossing the border, and that's a FIPPA violation. So, to stick to university policy, in order to officially embrace a language model, the instructor should designate a Canadian-hosted language model students can use. Otherwise, who knows what kind of data they're going to put in where. So, I think we have to work on it. I think I heard there is a Gen AI committee at UBC working on this too. I don't have any information about the specific progress they're making right now, but I'm hopeful, because UBC has to thrive in the coming AI-embedded education landscape. I hate to cut this off, but we are ten minutes to eleven and we all need a break. We started off with a very warm welcome from Elisa, looking at the full presence of Gen AI in our lives and our society. I really, really want to listen to your comments on the upcoming questions on Slido. I'm like, how are we going to evaluate students' understanding of Gen AI when I don't even know what Gen AI is myself? And there's so much that I would like to talk about too. Can we really beta test on our students when the grade will stay on the transcript forever?
There are so many questions, but I would like to take the last minute to thank our panelists: for their time, for the welcoming responses every time I emailed asking for input, and for the meeting we had last week. Thank you very much for sharing all of your knowledge about what's happening around you in your disciplines. Thank you very much.