I am an education consultant with the Centre for Teaching, Learning and Technology. I'm facilitating this workshop with Bo Sung and Lucas. I am at home now, situated on the unceded, traditional, and ancestral lands of the Musqueam people.

I would also like to acknowledge that one of the many issues we face with the use of generative AI is the homogeneity of the information. The way large language models work is that, using statistics, they put popular words together and turn them into a paragraph, into a piece of work. So mainstream ideas and concepts become even more mainstream. That makes it all the more important for us to intentionally include Indigenous knowledge and understanding in our work, whether as a form of acknowledgement or when we are asking our students to think about their work, its contribution, and its impact on society. When I was reading the Indigenous AI position paper, I learned that one of the Indigenous communities' goals is to create a future with AI that fosters all human and non-human kinds. So that's one of my goals here today: to think about the issues and challenges of AI.

We have been running this model, 30 minutes of presentation and 30 minutes of discussion, for the last few weeks, a couple of months. We expect a lot of participants. As you listen to us, please let us know if you have any questions; I will pause and let you ask them. You can always ask questions in the chat, which I will be monitoring, and you can also chat among yourselves using the Zoom chat. So it's a 30-minute presentation and a 30-minute discussion, but we are going to modify the format slightly. I'm speaking really fast now because 30 minutes is a very tight time constraint: we have a lot of information, and I also want to give everyone a chance to do some discussion.
So now, the acknowledgement and our agenda. We will start with a little bit of brainstorming at the beginning, then go into presentation mode, and then do some more brainstorming and discussion at the end. I purposely flipped the color of the brainstorming at the beginning because Lucas will be leading us through something quite different, something we haven't done in the last few workshops. So bear in mind that it's a little bit different; we are asking you to think in a different way.

Our learning objectives today: by the end of the session, I hope you will be able to recognize three major issues (there are many, as Lucas mentioned) that we face when using AI in our teaching and learning environment, and then co-create a list of small actions we might take to mitigate these issues. A very exciting activity.

Perfect. So I'm going to jump in now, and we're going to do an activity just to start things up. Our goal here is to think a little bit about what you see as the issues with generative AI. To get at this, we decided to use an approach called TRIZ. This is a Liberating Structure, basically a facilitative approach. Could I get a thumbs up next to your name if you've used TRIZ before? I'm just going to take a look at the list here. All right, I see a couple of x's and one thumb, so this might be new for you. The idea behind TRIZ is that instead of brainstorming about something in the positive, we think about it in the negative. We do this as a way to refine our brainstorming and think about it a little differently. So we've set up a Padlet that Bo Sung has put in the chat, and we're going to do a thought experiment together. Let's think the opposite way a little bit.
As educators, how could we misuse generative AI in a way that compromises student learning, hurts academic integrity, undermines equity in learning, and violates the rights and privacy of students, researchers, and creators? I'll let you open the Padlet now, and I'm going to stop sharing in a second, so I'll get everyone to jump onto that link. Again, here's our question: how could we misuse generative AI in a way that compromises student learning and academic integrity, undermines equity in learning, and violates the rights and privacy of students? Bo Sung, do you mind adding that question to the chat as well, so people are able to refer to it? I'm going to stop sharing now and get everyone to go to the Padlet and start adding their answers: basically, if we were going to implement AI really badly at the university, how would we do it?

I'm just opening the Padlet now and putting the question in there. On the Padlet, you'll see we have some different columns: one for violating student privacy; a wildcard column, if you can think of a way we could poorly implement AI that's not on this list; an equity column; and a student learning and academic integrity column.

This is great. I'm seeing lots under each column here. "Use AI to create teaching materials." That's an interesting one. To the person who wrote that one, are you able to jump on the mic and explain it a little bit more? Yes, Lucas, I can. Yeah, please go ahead. My thinking around that is, when you put a prompt into the AI stating the objective of a lesson plan, and the AI generates a lesson plan, and you use that instead of your own intellectual thinking for a workshop or your teaching class. Wonderful. Thanks for sharing that.
So, it looks like a number of folks have added entries, and I'll just go over some of these; I'm sure you can see them. Under compromising student learning and academic integrity: becoming a crutch that replaces brainstorming and critical thinking; using AI detectors to try to catch students when we know these detectors are problematic; allowing free use of AI, diminishing thought and learning. So a lot around this idea of affecting students' learning and hurting their ability to learn. No need for students to reference generative AI; using it for teaching material; not providing information about what might be inaccurate or a hallucination; using it to write papers instead of students.

And then equity: setting assignments that can be completed based on prompting skills (fascinating); mandatory usage when students may lack technology access; paid versus free systems, so the difference between paid and free ChatGPT, etc.; making assumptions about computing skills; making students use tools that are costly. And privacy: data storage outside of Canada; pasting students' work into gen AI to grade it; using personal emails or addresses; using people's names.

Thanks so much for sharing all that. I'm going to stop sharing now and give you a thinking exercise for maybe 15 seconds. You don't have to answer this, but: what are you doing, and what is the university doing now, that resembles this list? Again, I won't ask for answers, but I think it's an interesting chance to reflect on what we might be doing already. I think of teaching materials myself, and how I've been using AI to develop workshop materials, and some of the challenges with that. Not this workshop, of course; other workshop materials. So what we're going to do now is, we've set up the problem a little bit, and thank you for completing that TRIZ.
If you're interested in it for your teaching, I find TRIZ an absolutely fascinating tool for flipping brainstorming a little bit. Now we're going to look at some of the issues around compromising student learning and academic integrity, undermining equity in learning, and violating the rights and privacy of students and creators. And I'm going to turn it over to Judy.

So the first issue is compromising students' learning. How do we know that our students are learning? That question existed even before we had ChatGPT or other generative AI tools. But now they are here. This is a very recent survey, published on August 30th, just before the term started, by KPMG. They surveyed 5,700 Canadian students aged 18 and above, and found that students are using generative AI to help them with their homework, even though about 60% of them feel this is considered cheating. That is what the findings show us. I'll say more about this survey later, but let's go back to learning in our classroom.

In the classroom, I feel that we as instructors have the role of letting students know what we want them to learn, what we hope they will achieve by the end of the course. Many of us are quite familiar with Bloom's taxonomy; I'm only showing the cognitive domain here. From the bottom up, we have remembering, understanding, applying, and so on: the learning skills get higher and higher in order. We need the foundations; we need to learn something and understand it before we can apply and analyze it. Bloom's taxonomy is a way to help us guide our students' learning. With the emergence of generative AI, many of us may believe we can let go of a little bit of the memorization at the bottom of the pyramid. Generative AI is almost like, maybe even better than, Google.
And some research has found that GPT-4 gives better results than Google. It's quite true that generative AI is really good at producing very basic information now, and not just producing it, but producing it quite accurately. Just don't ask it to write a long essay; the basic information is there. So we may be able to let go of that memorization. We may allow our students to use more external tools, because external tools such as a dictionary, the internet, and the calculator do help reduce some of the cognitive demand on our brains. At the same time, that frees up space for us to do other thinking: more critical reflection, improving our self-efficacy. And we can also pay attention to the social-emotional environment, something that, as a student, I sometimes forgot or ignored; I would just memorize my work and forget about the facial expressions or the meaning behind some of the material I memorized.

But then again, I really want to emphasize: how much can we offload? Is it okay? For example, when we are driving, we rely a lot on GPS. What happens if the GPS stops working? Likewise, what if the computer stops working? What if there's no more internet access? What will our students do? What do we want them to be able to do as professionals without the computer? Something for us to think about. And I see Marcelo has a question.

Yes, thank you. About freeing space for the social-emotional aspect: I'm curious how that can happen when we are asking people to become even more engaged with their devices. How can you become more socially and emotionally attached to your surroundings when you are encouraging them to become even more attached to their devices to dredge up or drum up ideas, offloading their remembering to them? To me it seems incongruous.
I have one example I can share, but I wonder if anyone in the audience has other thoughts they would like to share too? I'm looking at all of you. I think it really depends on the person and the need. For example, we know that ChatGPT is even capable of detecting the tone of a big paragraph or a conversation, so it may help people who may not have that skill set to improve. Are we asking people to be attached to the computer all the time? I am thinking of a situation where I ask students to exchange the papers they wrote, and then, now that they have the time for that social activity, they interact and give critical feedback to each other. Not just "oh, you used ChatGPT really well," but more of that interaction. I'm seeing a little bit of conversation going on in the chat, so I'm going to leave it there. But very good question, Marcelo. Yeah, we are dealing with that. I am also thinking of my kids: I was at a family gathering yesterday with so many cousins and little cousins, and they were all on their devices together. Anyway, moving on.

Can I just jump in there really quickly, Marcelo? I was thinking a little bit about this, and I think all of it is a double-edged sword. Look at what social media has done to social-emotional learning, with students on their devices. But think about things like the flipped classroom, where we might be able to offload things like tutoring around specific areas to these tools, which we're finding are pretty good at one-on-one tutoring, something we haven't had before. Does that create other space? With that said, I think we should be really cynical and cautious about any promise that way, and that's part of the reason we wanted to do this session. Thank you. Sorry, Judy. Go ahead. Thank you, Marcelo. Thank you, Lucas. We could talk about that for another two hours, but we need to move on.
I promise, Lucas, I will pass the mic back to you in about two minutes. According to Oregon State University, with the presence of generative AI, in order to make sure students are learning, we need to look back at our learning objectives and adjust our assessment strategies. They have actually come up with suggested changes like these. If we still ask our students to remember something, we need to change how we frame it. We need to let students know that there are situations in your profession, in your discipline, where technology may be limited, and when it's limited, there is a need: you need to come up with that figure in front of the client without any help; you need to know what medication to administer to the patient when they faint. So we need to explain that to them, and really bring in the human skills. What are the human skills we want students to be able to demonstrate? Amend your assessment, change your assessment, so that we encourage self-awareness, metacognitive skills, and reflection. We can have more of that happening in the classroom, really focusing on the meaning, the reason why, in the age of generative AI, we ask them to do that question, and not just a question, but all the assignments we ask them to do. Make it more real, more authentic. So we need to change our assessment practice to guide learning with AI. That's a huge issue, and the how is another conversation. But let's look at the second issue, which Lucas prepared for us.

Is that me? I can just jump in here. It's okay. Thank you. So let's jump in. Judy just talked about academic integrity a little bit.
And just noting in the chat, there are some really interesting discussions, including a quote from Socrates about cognitive offloading and reading, which is quite interesting. But how can we think about equity with generative AI? I think this has become quite a double-edged sword. On one hand, we're hearing lots of emphasis on how these tools can create more learner equity; on the other hand, as you noted in the TRIZ, there's lots of potential for really big learner equity issues with these tools.

So, some ways we're hearing that it may increase learner equity. One is by allowing us to create learning materials with multiple means of representation. For example, right now in ChatGPT I can upload an image and have the tool write alt text for me, so that students using a screen reader would be able to access the image. It's pretty powerful already; it can do alt text really well. Another example: with a GPT-4 plugin called Vox, I can take the URL of a YouTube video and it will automatically generate a readable transcript of the video for me. This allows us, as teachers, to create multiple means of representation. Secondly, there's the idea of tutors. We know that tutoring is effective in education, and with prompting, and with different tools coming out like Khanmigo, students are going to have access to more individualized tutoring. With the caveat that these tutors are going to make errors, their outputs are going to be biased, and they're going to need analysis; but already, I think, students are relying on this as a tool for tutoring. And third, something we're able to do as faculty members now is create diverse case studies, examples, etc. using these tools.
So, when you're reading about generative AI, I think we see a lot of these discussed in terms of student equity, but we also need to think about some of the big challenges with student equity. Someone brought this up during the TRIZ activity: the different levels of prompting skill. We have a situation now where we know a lot of students are using these tools; that's clear in the survey. We know they're using them for their studies, but some students are able to use them a lot better than others. It's a little bit similar to the internet, although there I think we've had a lot more time to catch up. As this quote says, some students are able to make these tools sing. If you see articles online talking about the limitations of these tools, a lot of those articles aren't prompting them properly. With good prompting, and by paying more money, some students are able to get much better outputs.

Another thing I've noticed is faculty saying that it's easy to identify students' work done with ChatGPT or with generative AI. In some cases this is true. On the other hand, I think what they're identifying are examples of bad outputs, and the students who are spending more time developing really effective outputs are going under the radar a little bit more.

Another piece that's coming out is output bias, and one of the big challenges with output bias is that it's really hard to know where these tools are drawing the bias from. This is a search on Midjourney, an image generation tool, for "computer science professor." No matter how many times I prompt it, every computer science professor is not only male; they all seem to have a beard and glasses and be of the same age. This bias is hidden inside the tool, and it's a little bit difficult to figure out what is biased and what isn't.
And we've seen in AI and in the technical field lots of examples where these biases are built into tools; I think of the discriminatory hand dryer that would only recognize white hands. So lots of examples of how these biases are built in. And what does it mean when the bias is built into our knowledge itself, in some cases? There are no easy answers, but here are a few approaches to improving equity around these tools.

First, think about AI literacy and fluency and what that might mean for students. If some students are better able to use these tools, what kinds of prompts and what kind of help can we provide so that students are more equally able to use them? Secondly, when we're assigning something, ensure there's an equitable option not to use these tools. I work with students in this area of gen AI through the Digital Tattoo project, and what we're coming across is that quite a few students will not log into these tools. They just say: no, I'm not giving my phone number to Sam Altman for two-factor authentication; I'm not logging in. So how do we create assignments that are equitable and respectful of students who don't want to share that information? In some cases, students see the same issues we're talking about today and are simply saying, I don't want to be part of this. How can we create assignments, and learning, for these students in a way that's equitable? And finally, and I think this is probably the most important: including student voices in the decision-making processes that all institutions and organizational units are going through now. How can we ensure that the student voice is in the discussion, and that we're thinking about tools that genuinely benefit all students and promote equity? This is something that Valerie Irvine talks about.
She's a faculty member in education at the University of Victoria who has worked a lot on educational technology and how we think about the role of students.

The third area I wanted to talk about is student privacy, and, adding to that, the rights of researchers and creators. Again, this is such a substantial issue that I don't think we have easy answers at all right now. This is a quote from the head of the teaching and learning centre at the University of Edinburgh: there's a lot of university data going somewhere right now, and we don't know where it's going. This is a huge challenge, and we're seeing it with proprietary data and with student data. In a second, Bo Sung is going to share a table showing some of the terms of service around these tools. In most cases the terms of service are quite weak; they allow the companies to do things like use your data and train the tools with it. So how are we stopping this right now? That's quite a challenge. It's evolving, but it's something we haven't really figured out yet.

So: ambiguous terms of service, as we've seen. We've already seen data leaks, where in ChatGPT some users have been shown data from other users. If you go back to the year 2000, a long way now, I think of some of those early Yahoo search engine leaks, where suddenly all of the searches people had put in were leaked, and people didn't realize how very personal some of those searches were. I would expect the same right now with ChatGPT and generative AI, including for institutional privacy. And finally, right now there is no privacy impact assessment, so we need to be very cautious about how we use these tools in the classroom, especially if we don't have an equitable way for students to do an assignment that doesn't require them. But it's also worth locating these tools in their evolving nature.
I don't think, and I don't think a lot of researchers think, that this is the final state for these tools; this seems like a very early time. A few trends I'm seeing that may change the privacy picture. First, enterprise-level tools: there's already an enterprise-level ChatGPT out there, and an enterprise-level Bing has been released. These enterprise tools have more privacy safeguards, and I expect that as institutions think about these tools, we're going to see more focus on enterprise-level tools that have been set up with more privacy safeguards for the institution. Second, building these tools into current software: with the API out there, finding ways of building these tools, in a more private way, into software we're already running. And third, custom applications: tools that may guard privacy in better ways than the more open versions of these tools now. And I see Faiza Mufti from Education shared a link about a Bard leak that just happened; thanks for sharing that, Faiza. So I think we're at a stage right now where there are some particular privacy issues. These will continue, but I expect they will change.

So what can we do now? Just a couple of actions, and I'd be curious, when we get to the next part of the session, about the actions you share. Number one is not requiring gen AI as part of course grades without an equitable option for students who don't want to use it. This gets tricky even if we're only recommending it as a tutoring tool: what is the equitable option for those students who don't want to use it? And secondly, being careful ourselves about what personally identifiable information we're putting into these tools. GPT-4, if you use the advanced data analysis tool, allows you to upload spreadsheets and do data analysis, and you can write emails with it.
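One small safeguard before pasting any text into these tools is to redact names and email addresses first. As a minimal sketch of that idea (the placeholder format, the name list, and the regular expressions here are illustrative assumptions, not any specific tool's behaviour):

```python
import re

def redact(text, names):
    """Mask email addresses and known names with placeholders
    before pasting text into a third-party generative AI tool."""
    # Mask anything that looks like an email address first, so a name
    # appearing inside an address (alice@...) isn't half-replaced.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b", "[email]", text)
    # Replace each known name with a numbered placeholder, e.g. [name 1].
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[name {i}]", text, flags=re.IGNORECASE)
    return text

draft = "Hi Alice, please send the grades to alice@example.com by Friday."
print(redact(draft, ["Alice"]))
# -> Hi [name 1], please send the grades to [email] by Friday.
```

A simple script like this will miss plenty (nicknames, student numbers, addresses), so it complements, rather than replaces, reviewing what you paste.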
So be really careful about not putting names into these tools, and not putting institutional data into these tools. A thought experiment I often do is: how would I feel if this data leaked out? How would I feel if what I was putting in was suddenly exposed? So I'll do things like replace people's names if I'm writing an email; I'll put "[name one]" in square brackets. If I'm getting it to do any sort of analysis, I use summaries rather than the original data. There are lots of ways we can think about privacy safeguards at this time.

Next is IP, and I think this is one of the even stickier problems, although it's so hard to rank stickiness; there are just so many challenges. Right now there are a lot of lawsuits going on against these tools. I know Sarah Silverman is involved in one of these lawsuits, related to her book, and there are multiple artists involved in others. What we know is that OpenAI and a lot of these generative tools have scraped a lot of private data. A couple of examples of how we know this. Take your favourite book, especially a book from before 2021, and ask ChatGPT to summarize each of the chapters or write a worksheet based on the book. Generally I've found that it does this quite well, so clearly it scraped a lot of private books. I was doing a workshop recently with a law professor in it, and she writes in very specific areas of Canadian law. She found that ChatGPT was answering questions in that very particular area of law involving her work, even though it was in a paywalled journal. So there has been lots of scraping of private data. And this is a prompt: if you have a second now, put this prompt into your ChatGPT and see what the results are. At least when I put it in, it's able to summarize each of these chapters, even though the book isn't open access. And I think one way we can respond is to double down on citation.
This is from Waterloo; it's a quote about citation. As students are using these tools, and as we're using them, trying to track what they're talking about, tracking citations and locating ourselves in scholarship is something we all do as researchers. So how can we continue to do this, and almost double down on it?

But it's beyond IP. From an Indigenous perspective, this is a quote from Michael Running Wolf. Michael is a researcher in data analysis and is Cree. What he says is that, from an Indigenous perspective, Indigenous data has been scraped and scanned into the system without permission, and then exploited for its intellectual property without acknowledging the creators. So this is not just about IP theft or copyright; it's about cultural appropriation now. If you ask, for example, for some ideas around Indigenizing your class at UBC, the tool is able to pull from all of the web resources at UBC, but it doesn't acknowledge any of them. And this is just one example.

It's interesting: I'm just noticing in the chat that Rachel Wilson mentions that when she tried the prompt about the book, ChatGPT told her it couldn't do it, which is absolutely fascinating, because for me it doesn't refuse. That raises the point that the outputs are going to be different for everyone, and there are often ways around some of the guardrails built into these tools. So how can we help acknowledge creators and researchers? Again, I think doubling down on citations is important, and emphasizing this with students. There are also tools like Bing AI. Bing AI has some limitations, but one of its values is that it links to scholarly citations, so at least you can start following where it's pulling information from.
Another approach might be to have students not only acknowledge sources, but also acknowledge how they're using the tool, almost the way they would acknowledge using a research tool like NVivo. Again, this isn't perfect, and it will be really interesting to see, over the next little while, what the results of these lawsuits are. From what I've read, if plagiarized work were removed from the ChatGPT database, they wouldn't be able to run the service, so it will be fascinating to see how these tools stand up to these lawsuits.

This is an example of the Bing output I mentioned. You can go to Bing AI and log in there; yes, you do need to log into Bing. I asked it to write an article about gen AI using scholarly resources, and you'll see that it's able to link to scholarly resources at times. Like all generative AI output, though, this is inconsistent, and that inconsistency, I think, can also be a challenge.

I also wanted to respond, before we end, to a question from Annie about the difference between GPT-4 and GPT-3.5. I think it's one of those equity issues right now: GPT-4 scored in the 90th percentile on the standard bar exam, while GPT-3.5 is in the 20th percentile. They're almost night and day in terms of the quality of their outputs. In addition, GPT-4 has plugins, and a tool called advanced data analysis. I made an entire PowerPoint with it the other day. It accepts files; it can create PowerPoints; it can take an Excel spreadsheet and do data analysis with it. It's very different from what you're used to with GPT-3.5. And finally, there are, I think, 200 plugins for GPT-4. The one I'm using is called Vox. It allows me to put in any video URL, and then I can interrogate the video so I don't have to watch it, which is quite a time-saver.
In terms of time, I can just ask, say, what are the key points in this video. Finally, GPT-4, as of yesterday, came out with an image recognition tool. You can upload an image to it, and it will analyze the image and write about it. To give you an example: I had a picture of my kid's Lego (this is from a few years ago; he did a lot of Lego), and I asked, what Lego is in the picture? It was able to identify all of the minifigures, as well as the Guardians of the Galaxy ship, and tell me what it was. If you have a chance to play with it, it's quite powerful. Just to answer your question there, Annie.

So I think we're at the end now. The last thing I wanted to mention before we go is that CTLT has an AI site; please take a look at it when you have a chance: ai.ctlt.ubc.ca. We're also going to be holding an AI symposium on November 14th and 15th. That symposium is a collaboration between some computer science faculty, a theatre and film faculty member, Patrick Pennyfather, who is also in the Emerging Media Lab, and CTLT. We're going to have a student panel, as well as a faculty round table and a couple of hands-on sessions around generative AI. In the follow-up for this session, probably next week, I'll share a link to the symposium with you, as well as the recording.

On behalf of all the facilitators, I want to thank you for the wonderful discussion today, and for the great points brought up. I wish there were clear answers to these issues now; I think they're making many of us, including what Alex was bringing up, quite uncomfortable right now. From my perspective, the only way we can work through them is to understand these tools a little more and to have these conversations. So thanks very much.