Welcome, everyone. I'm just going to wait a couple more minutes. We expect about 120 people joining us this afternoon, and I'm seeing about 50 right now, so we'll wait another minute. Thank you for being on time. Feel free to go grab your lunch; this is the lunch hour, so grab your lunch, grab your drink, and make sure you're seated comfortably while we let people in from the waiting room. Thank you. So good morning, everyone. Welcome to our session on integrating generative AI and assessment design. To begin, I would like to acknowledge that I am facilitating the workshop on campus today, on the ancestral, traditional, and unceded territory of the Musqueam people. On the slide you will see an image I got from BCcampus: a holistic course design framework. What I really like about this image is the very top, where we have this term, the specific capacity. Every time I look at my students' assignments and the work they submit as homework, I am learning from them. I don't just learn from them in our everyday classroom interactions. At the end of the course, or in the middle of the course, when I read their assignments, when I read their writing, that is where I feel I'm learning from them. I am a little concerned that, working with generative AI, I am not really learning from them anymore; someone else is in the picture. At the same time, I just want to remind you that it's not just assignments but our teaching as a whole: it is a reciprocal relationship that we are establishing with our students, and this is something that I really value. Now, a little bit of housekeeping. Let me get out of this mode here. Housekeeping: please stay muted, as we have a lot of participants. There will be pauses where we will answer some of the questions you submit in the chat. We also have a Google document.
I'm going to put the link in the chat; thank you, Joe. That document is where you can ask us questions. Later on, we will also have a Google document that includes the instructions for our activities, and that document will be available for viewing only. So there is one document where you can add your questions, and a second, view-only Google document for you to follow the instructions for the activities. Yes, I have included the two links in the chat, so you may want to keep those tabs open for the next two hours. And I would like to introduce who is behind the scenes right now. My name is Judy Chan; I'm an educational consultant with the Centre for Teaching, Learning and Technology. With me is Lucas Wright, the main Gen AI expert on our team. He knows a lot about AI, and he's the person who put these workshops together. I would also like to thank Nicole and Joe; they have a lot of experience and also care about how Gen AI will have an impact on our education, and they will be providing support behind the scenes. Thank you, Nicole and Joe. Like I mentioned before, it's going to be a very busy two hours. We are going to introduce what generative AI is, along with some of the perceptions and realities we've heard from colleagues across North America and across the globe. Then we will talk about its capabilities and how we can integrate it into teaching. In between, we will have lots of small activities designed for a large group of 120 people, so feel free to use the chat and the question sheet to interact with us. Small activities throughout, and keep your tabs handy, because it will get really busy later on. Learning objectives: in the next two hours, we hope you will identify some techniques for effective prompting.
And refine the output from generative AI; we feel that we need to know what generative AI is before we can design an assessment around it. Then: design learning activities incorporating generative AI while maintaining academic integrity, creativity, critical thinking, and rigor. We are also going to look at some practical applications of generative AI in education contexts. We've heard a lot in the news about using it to plan a trip or to cook, but we really need to focus our session today on our education environment. Before we start, I would also like you to think about your own learning goal: what is something that you would like to achieve by the end of the session? If generative AI is very new to you, then you may start with creating teaching and learning materials, such as a lesson plan or other learning material, and then move to assessment: your assessments and associated rubrics, teaching guidelines, or other tools that can help support students' learning. So take a moment and think about what you want to achieve by the end of the session. We have a very busy schedule, so I'm going to keep moving. Okay. Before we start, I just want to introduce some of the AI tools. We introduced some of them in our workshop two weeks ago, but here they are again. ChatGPT 3.5 is something that I think most of us have heard of. Registration is required, and what that means is that we may not assign it to our students as a mandatory exercise. We may introduce ChatGPT to our students, but for privacy reasons it cannot be made mandatory: it should not be a mandatory classroom activity, and it should not be tied to course grading. ChatGPT 4: same thing, registration required; it is actually a paid version of 3.5, and much more powerful. Bing AI: again, registration and the Edge browser are required. Google Bard is not available in Canada, but I believe that some students will still have access to it.
Claude 2: not available in Canada either, and again, students may have access. Talk AI and Llama Chat: these last two are free tools that we can access with no registration required. So depending on what you may ask your students to do in the classroom, those are two possible options. I'm going to pass this over to Lucas now. Thank you, Lucas. Hey, everyone; Lucas Wright here. Thanks so much, Judy. We'll be jumping back and forth a little bit during the session. We've recognized from other sessions that some folks are new, and before we jump into this, I want to acknowledge that there's such a wide variety of understanding and expertise on these tools right now. I think it's worth trying to get on the same page a little bit during this first section, while acknowledging that we're all coming with different experiences. So I want to talk a little bit now about perceptions and realities in thinking about Gen AI within the classroom. Just to mention my background: I've supported learning technology for the past 13 years at UBC in the teaching and learning space, and AI is something I got quite interested in as it became popular in December of last year, so I've been researching it, taking courses, and giving workshops in this area. I don't know that there are experts at UBC these days, but I think I do bring some understanding from having the privilege of being in these spaces. So, what is generative AI? This is a Wikipedia definition: generative AI is a type of artificial intelligence that is capable of generating text. The big difference between Gen AI and previous AI is the ability to generate rather than compile and sort. It can generate text, it can generate images, it can generate molecules: a wide variety of things, depending on the tool. Generative AI models learn the patterns and structures of their input training data.
I'll mention what they're trained on in a little bit, but they use statistical patterns in word sequences to predict the next set of words based on their training data, and these are really large corpora of training data. They use this to generate new data, and again, that data can be an image or text with similar characteristics. Another term worth thinking about here is prompting: what is a prompt? Today we're going to be doing quite a bit of prompting together, because one of the keys to really working with and understanding these tools is good prompting. I know a number of you came to the promptathon a week or two ago, where we started looking at effective prompting. A prompt is a seed statement that guides the AI to create contextually relevant output and influences the richness and accuracy of that output. Early on especially, you would see media reports saying, well, ChatGPT doesn't do that much, it has generic outputs. One of the challenges of these tools is garbage in, garbage out: a generic prompt creates a generic output, while a specific prompt creates a far more specific output, and the data it pulls from tends to be more specific data. In the promptathon, we mentioned the term prompt engineering, but I think we can go even larger than that and think about prompting in general. While prompt engineering is a perspective on prompting that applies software thinking and engineering, these are natural language models, and in an interdisciplinary space like a university, we have this exciting opportunity to think of different disciplinary approaches to prompting. We can think of a prompt engineer, but what about a prompt creative or a prompt artist? What would prompting look like for them? How would a psychologist prompt these tools?
Because again, we're using natural language. What about a physicist? How would they use scientific inquiry as part of their prompting? How would a philosopher prompt these tools? So as you're thinking about prompting, a lot of you probably have an intuitive understanding of effective prompting from your work with these tools, but it's also worth thinking about how your discipline would work with natural language to generate the most effective results. I think it really opens up this idea around prompting. There are a couple of other issues worth raising, and I find myself really torn with generative AI. On one hand, I have some challenges with written output myself, and I have found this a really exciting and interesting way of improving my own writing and of thinking about its use in teaching and learning. On the other hand, I find there are really significant privacy issues, intellectual property issues, and equity issues, as well as a number of others. I want to focus now on privacy, intellectual property, and equity. First of all, privacy. What we know now is that the terms of service on most of these tools are fairly problematic: they are using the data that we put into them for training, and we're unsure of exactly how they're going to use this data; there's a lot of ambiguity. In the case of ChatGPT, they have also already had user data leaks, where they leaked the input of some users to other users. So, ambiguous terms of service and leaked user data: it makes one pause and think about putting university data into these tools, and about what it would mean for students to put their data into them. I think this is a much larger conversation, but here are a couple of high-level strategies we can think about right now, as of today (and I'll often say "as of today" in this conversation). One is disclosure. How much do we disclose to our students about the use of these tools, say, in helping us create assignments?
How much do we disclose to our colleagues about the use of these tools in our day-to-day work? What I'm finding doing these workshops is that I'm quite surprised how much people are using these tools day to day. Second, with students: what opt-out options are we giving them? I'm already working with students who really don't appreciate using these tools. They don't like using them; they're uncomfortable with the privacy. So what do we do with students like that if we are using these tools as part of an assignment, and how are we telling them this in the syllabus? On an assessment resource on the CTLT website, as well as on the Academic Integrity Hub, we have examples of syllabus statements that we can think about using. But again, this is evolving. And how are we avoiding inputting personal information? A simple way that I do this is, instead of putting someone's name into the tool, I use square brackets and say [name one]; instead of an institution, I use square brackets and say [institution one], even if I'm getting it to compose an email. So I think it's worth reflecting now on what you're putting into these tools and how you're mitigating these privacy issues. The second point here is around intellectual property and copyright. I was recently doing a workshop with someone from law, and they mentioned that they asked GPT a question about a very specific area of law, and GPT was able to answer it. What surprised them was that they were the only people who had ever written in this area, so clearly it was scraping their work in an unauthorized way. Another example is from a UNESCO report, where the copyright symbol appeared over 200 million times in the data that was scraped to train these tools. Most of the data used in ChatGPT and in Google Bard is gathered by web crawlers that have crawled the web and scraped it for data.
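The square-bracket placeholder habit described a moment ago can be sketched as a tiny helper that scrubs known identifying strings before any text goes into a tool. This is a minimal illustration, not part of any tool's API; the `redact` helper and the placeholder labels are hypothetical:

```python
def redact(text, replacements):
    """Swap each known identifying string for a bracketed placeholder."""
    for original, placeholder in replacements.items():
        text = text.replace(original, f"[{placeholder}]")
    return text

# Scrub a draft email before pasting it into a chat tool.
draft = "Hi Alice, thanks for your work on the UBC report."
clean = redact(draft, {"Alice": "name one", "UBC": "institution one"})
print(clean)  # Hi [name one], thanks for your work on the [institution one] report.
```

The same mapping can be applied in reverse to restore the real names in whatever the tool returns.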
What this scraping means is that while they've captured things like Wikipedia, they've also captured things that were meant to be private. What does this mean, and how do we think about it ethically when we're using these tools? A faculty member at UBC Okanagan was recently using GPT-3 to generate questions for their class, and they put an attribution statement on the assignment: that the questions were developed in collaboration with ChatGPT-3, a large-scale language model; that they reviewed, edited, and revised the language to their own liking; and that they took responsibility for it. So again, as this evolves, if we're using these tools, it's worth thinking about how we handle the IP around them. And then equity. This is one of the reasons I'm so interested in prompting: right now there are a number of different equity issues, but one is that some students understand how to prompt these tools much better than other students do. They have developed expertise, and they're using different models: they might be using GPT-4 and paying $20 a month, and they can make these tools sing. From an equity perspective, what does this mean if we're thinking about academic integrity? Which students are we catching? Because the students who are more obvious are going to be the ones who are less good at prompting. In addition, if we start thinking about these tools in assignments, which students are we favoring, and what prompting do we need to teach the other students if we're going to use them in an assignment? Again, one of the reasons I'm excited about this area is that I think it's worth us, as faculty members and staff, also improving our own prompting so that we really understand these models and can use them in similar ways. We'll be interleaving this theme throughout the discussion. But what about academic integrity?
How are we thinking about academic integrity? What are some ways we can develop robust assignments that have fewer academic integrity issues than others, and what assignments may already be resistant to the use of generative AI? When we think about academic integrity, we can also think about AI checkers. At the very beginning of this, in December and January, when ChatGPT first came out, there was a lot of talk about AI checkers. There was one faculty member at Texas A&M who put all of his students' work into ChatGPT and asked, did they use AI to create this? It said yes, so he reprimanded all of them and charged them with academic misconduct. It turned out they hadn't used AI, and the checker was wrong. Checkers so far have had a lot of false positives, and in particular they flag the writing of non-native English speakers more often. In addition, checkers are easy to avoid: once students have an AI output, by tweaking it, running it through other AI models, or doing some editing, they are able to evade checkers. So currently, again as of today, checkers are a problematic way of identifying AI-generated student work. I'm going to hand it over to Judy, who's going to do an activity with us now. Thank you very much, Lucas, for introducing Gen AI to us, along with some of the realities. Now that we've heard about some of the realities of Gen AI, what are some of your biggest fears? We would like to give you a couple of minutes to type your response in the chat. There are over 80 of us here, so take your time: one or two minutes. Type it out, but don't hit enter until Lucas gives us the three-two-one countdown; we've done this countdown a few times. So type: what are your biggest fears about generative AI in our teaching, in our courses, in our disciplines?
And maybe once you've typed it into the chat, give us a thumbs up using your reactions, before you click enter, so that we know you're ready before we count down. Lucas, maybe you should do the countdown. Yeah, I'll count down, not yet, but I'll count three, two, one and then get you to click enter, so we can see all of the fears that you've shared pop up at once. Ready? Three, two, one, enter. So I'm just going to read a couple of them off to you now: the potential loss of critical thinking skills that in the past were developed hand in hand with the nuanced ability to write in a scholarly manner. Ali Reza mentions errors in the assessment of competencies. Susan Curtis writes, from an English language perspective, that students will question why they should bother to learn English at all. Students not conducting their own research and writing, dumbing down their opportunity to learn. A biggest fear that students will rely entirely on this tool for all assignments and lose analysis and critical skills. So I'm seeing this idea that maybe there's a concern about academic integrity, but a lot of this is a concern about students losing their ability to learn, or not having an opportunity to learn. Ron writes: students using ChatGPT to write assignments but not understanding the answer, which requires the teacher to question the student to validate what they understand and what they have written. Sun writes that students overly rely on it. Kathleen mentions: how can we assess students' gain of knowledge from our course using reports and papers, and not just in-person exams, given that they can use it for report creation? Andrea notes: creating inequality in the classroom for students not using AI. And I would even expand that to inequality throughout the world, as I use ChatGPT 4, which I pay 20 or 30 Canadian dollars a month for, and that gives me access to more.
So if we connect that to intellectual property, a cynic would say that we're paying for the use of our own data, and only some people are benefiting. I think it's a good point, alongside academic integrity, that students will not learn fundamental skills. So thanks very much for sharing that. What I'd like to do now, if it's okay, is pause for a second and see if anyone is interested in sharing their concerns or issues on their mic. You can raise your virtual hand, or just jump in and share a fear, a challenge, or an issue using the mic. I'll give everyone about 30 seconds. Yeah, please go ahead, Anna. I've been really thinking about generative AI and the way that it reproduces knowledge that isn't necessarily wrong but doesn't necessarily get to the depth or the nuance that's needed, particularly in the field of nutrition, where things are always changing; that's the field I work in. A lot of the old material we have is based on some really, I don't know, crappy science. So it's very interesting seeing what it gives me and knowing that, unless you are an expert in the field, you wouldn't recognize that it's not wrong, exactly, but shallow. I worry about students not understanding how to critically analyze science beyond facts. That's the main thing. That's such an interesting point, that loss of nuance; I'm hearing a lot about it, and I'm noticing it myself in the education space. Sharlene? Hi, thanks for this. I'm really worried about a world where disinformation flourishes, and generative AI really amplifies disinformation. That's very, very scary to me. Yeah, absolutely, it can be such a disinformation machine. Thanks for sharing that. Any other points? All right, well, let's continue on a little bit.
Thanks for sharing those fears. The purpose of that section was to start getting us all on the same page with generative AI: to really flag the issues and think about how we're thinking about them. What I want to do now is focus in a little on the capabilities of generative AI. Again, this is an area some of you may have played with; I caught in Anna's answer that she has been seeing both the nuance and the capabilities. But it's worth understanding the capabilities of these tools as we plan for how we can incorporate them, or mitigate them, in our teaching. The first capability is textual output, and this is probably something most of you have seen, although good prompting can really make a difference in the quality of textual output. GPT, Bard, and Claude, if you use them, are good at generating textual output. This can come from transforming text: for example, I've put in an open textbook and had it generate a glossary of terms for me, put in complex documents and asked it to explain like I'm five, or just asked for straight-up answers. In this case, I said: generate five potential thesis statements for a 200-word essay in a first-year philosophy course, each one concentrating on the topic of the environment. You can see by that prompt that I'm getting fairly specific, and here's the output that came back from it. What I could do now, if I wanted, is drill down on any of these and say: expand on number one. If you have experimented with GPT, or with Claude or Bard, you probably found it relatively accurate, though again maybe missing the nuance in this textual output. But how good is it? As we think about designing our assignments, it's worth asking how good it really is. One way we can do this is to look at how it has done on standardized exams.
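Circling back to the garbage-in, garbage-out point: one minimal sketch of what "specific prompting" means in practice is to assemble a prompt from explicit components rather than a one-line ask. The component names here (role, task, audience, constraints) are illustrative, not a standard:

```python
def build_prompt(role, task, audience, constraints):
    """Assemble a specific prompt from explicit parts instead of a one-line request."""
    return "\n".join([
        f"You are {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        "Constraints: " + "; ".join(constraints),
    ])

generic = "Write some thesis statements about the environment."
specific = build_prompt(
    role="a teaching assistant for a first-year philosophy course",
    task="generate five potential thesis statements for a 200-word essay",
    audience="first-year undergraduates with no philosophy background",
    constraints=["each statement concentrates on the environment",
                 "one sentence per statement"],
)
print(specific)
```

The more of this explicit context the model gets, the narrower, and usually the better, the output.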
This chart is from OpenAI, and it shows GPT-3.5 in blue and GPT-4 in green. You'll see, for example, that on the Uniform Bar Exam, GPT-3.5 scored below the 20th percentile, while GPT-4 is in the 90th percentile. What we're finding now is that on many standardized exams these tools are getting into the 80th and 90th percentiles, and this is a pretty early generation of these tools. Another example is an article by Bodnick, an undergraduate student at Harvard. She did some informal rather than formal research: she submitted Gen AI-created responses to assignments in seven or so of her classes and then had instructors grade them. In the resulting table, on the left-hand side you'll see the course name, in the middle the assignment, and on the right the grade. In the article, she talks about being able to do relatively well in lower-level Harvard undergraduate courses using Gen AI. ChatGPT 4 has also done quite well in critical reflective writing. This is from an article by Lee et al.; we'll share all these articles with you afterwards. What they did is take reflections in pharmacy: they worked quite a bit with prompting and generated critical reflections for an assignment, had students submit the same assignment, and then had evaluators go through and evaluate both sets. What they found was that GPT was able to generate higher-quality reflections than students in most cases. In addition, evaluators were not able to tell the difference between AI-generated and student-generated critical reflection writing. In terms of code and data, this is an area where GPT is improving continually. This is a Python output I had it produce, and if you use the code interpreter in GPT-4, you can get it to execute Python commands and generate most code.
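To make the code capability concrete, the kind of self-contained exercise these tools solve reliably from the question text alone looks something like this (a hypothetical first-year task, not one drawn from the studies mentioned here): "write a function that counts the vowels in a string."

```python
# Assignment-style task: count the vowels in a string, case-insensitively.
def count_vowels(text):
    """Return how many characters of text are vowels (a, e, i, o, u)."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("Generative AI"))  # 7
```

Because a task statement like this doubles as a complete, unambiguous prompt, pasting it verbatim into a chat tool typically yields working code.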
This is an example from an article by Savelka, Bogart, Song, and Sakr. They looked at GPT-3.5 in first-year Python programming courses, and they found that while its solutions weren't in the 90th percentile, it was able to score 56% and 67%, respectively, on these assessments. They acknowledged in the paper that students would be able to pass the course if they used GPT on these assignments. I want to turn it over now to Ramon Lawrence, who is going to share some of the research he's done at UBC Okanagan on the capabilities of these tools and the response to them. So, Ramon, are you there? Hi everyone, yes, I'm here. Wonderful. Let me give you slide-sharing ability; just give me a second here, and you'll be able to control the slides as well. Then I'll let you jump in and share this fascinating research. One moment. Okay, thank you. My name is Ramon Lawrence. I'm a professor of computer science and the academic director of the Centre for Teaching and Learning on the Okanagan campus. As a computer scientist and someone involved in the CTL, AI is very interesting to me; but I'm also a professor who has to teach these courses, and like many of you, I'm trying to figure out what my response will be. So what we did, first of all, is some investigation to see how effective ChatGPT 3.5 is, 3.5 only because that's the free version available to students at this point. Unlike the previous publication we saw related to Python, our assessment for our first two computer science courses is that the AI was getting basically 100%. All the students had to do was copy the question text and use it as a prompt, and the code that came back was basically perfect. And it makes sense, because in first-year courses you try to be very specific in your questions so there's no confusion, but that also makes a wonderful AI prompt.
One thing we noticed especially is that any question that includes expected output, "what do you expect the answer to be," is especially useful to the AI as a prompt, because it can match up that output against what it is generating. But I don't personally teach first year; my course is a third-year course in the area of databases, so I was a little more hopeful that the AI would not be as effective in my course. Unfortunately, that was not the case. Databases are coded in a specific language called SQL, and the AI was getting 95% right. I still remember the first time my research student showed me; I said, there's no way it got that right. But it did. It's actually shocking the level it can perform at. Even more worrisome, there's about a four-week programming project that's typically team-based and involves hundreds of lines of code, and it was getting about 80% on that. One thing we noticed is that whenever you give samples and say "fill in something here," the AI is really good at filling it in, because you've basically given it a prompt that says, just give me some help on this piece. That's very useful for students when they're learning, but it's also a very effective AI prompt. The only thing we saw that had some resistance was questions with images or graphical output; at the very least, you force the student to retype the information rather than copy and paste it into the prompt. And what I'm showing you in the image is something that was shocking to me: it was a very hard design question where you had to produce a diagram, and the AI actually drew a diagram in ASCII text. It wasn't right, but it was not too far off either. So I was expecting that to be resistant, and it wasn't as much as I'd hoped. With that in mind, we asked: can we detect whether this has happened or not? So we tested some similarity-based detection tools for code, similar to Turnitin except for code.
And the fact is that they didn't work: they were flagging AI assignments as more similar to human assignments than to other AI assignments. In other words, when we generated four or five AI submissions, they looked different from one another to those detectors. Even more discouraging, we had multiple instructors and TAs evaluate human submissions and AI submissions randomly in an experiment, and we really couldn't tell them apart. Even myself, I was getting a 10% false-positive rate, where I would have been accusing people of academic dishonesty when they really were not. And the heuristics we used would be easily beatable by students once they learned them. What we were effective at detecting was actually the solutions that weren't as good: AI solutions were more professional than the student solutions, and that's usually how you could tell them apart, unfortunately. I'll just mention that the results we have here are going to be put out in a research paper shortly. So given that, the question is: what can we do as a response? My thinking for my course is that in the very first class, I'm actually going to demonstrate the AI doing question number one, basically to acknowledge that I know it's there and I know students can use it. At the same time, I'm going to say don't use it, but I want them to be aware that I'm aware. And then the same argument I've made before with other types of cheating: I'm going to motivate them. If the AI can do the work, why would someone hire you to do that work? You have to build those skills, so I'll always be emphasizing skill mastery. In a sense, nothing is different now that we have AI; we're just arguing once again why the students should do the work. I'm going to look at weighting the assignments a little differently, and I might offer bonus marks for students to come to office hours to explain their work, so there's an opportunity to verify whether they did the work, or at least understood it.
I'm still going to give them an opportunity to use the tools to solve practice problems, because I know that in industry they're going to have to. And then there's the question I'm still struggling with, which hopefully we'll talk more about throughout this workshop: my exam used to be done online on Canvas, and the AI was getting about 95% if you just took the questions and prompted them in. So there is definitely an issue with computer-based exams. But I'm happy to answer questions later, and I look forward to interacting with the rest of this workshop. Thank you. Wonderful, thanks. Why don't we take a couple of questions now if folks have them, questions or comments about Ramon's work. You can either enter them in the chat or raise your virtual hand and just jump in with your questions, comments, and concerns. I'll give folks a minute; otherwise, you're welcome to ask questions towards the end as well. A question from Hannah: just curious how universities are going to approach possible ungrading in the wake of these tools. I really think we should be working towards mastery assessment of skills where it's appropriate, and then it's just a question of how we assess it. I know it's hard to scale in-person assessments, like interview-style ones, but that's the most authentic way. So I think we're going to have to think about what types of assessments we can do that are better and more authentic in engaging the student directly. And I do see a risk with computer-based exams, especially ones invigilated over Zoom. Talking with my CTL hat on, I think those were problematic to begin with, and now even more so; that is how I feel as an instructor. Wonderful. And Ron Wasik suggests exams should be in-class versus online or take-home exams. Yes, the same thing.
With online or take-home exams, you never had a full guarantee that students were completing the work themselves; they always could have outsourced it. Now it's just much more convenient to outsource it, with the AI generating it. So it is something we definitely have to think about. Wonderful. And then Dorothy mentions: how do we define, explain, explore 100-level learning in this context? Dorothy, do you mind clarifying what's meant by 100-level learning? Are you able to jump on the mic? Sure, yeah. Thanks so much, Raymond, for giving us this very hands-on example. I'm approaching this from more of a humanities perspective, where, as other people mentioned earlier in the first section, it's basically the basic tools of how to write an essay, how to do basic research. You've been talking about mastery, but I think those are two very different things: mastery, and these very basic skills that we usually teach. I have a little bit of a hard time wrapping my head around the basic stuff. I totally agree, and here's the problem: in order to critically evaluate the AI and do these higher-level skills, they need the basic-level skills. But the AI allows them to skip those basic-level skills and does them for them. So the question seems to be, how do we motivate them to still build those basic-level skills in whatever context you have? In our context, you can't build big software unless you do the first-year programming. In the same way, you can't write interesting articles unless you know how to write in general. So I think it becomes more about motivation. You can't rely on detection, so we have to somehow build that intrinsic motivation for students to learn to do it. Great. And just a couple of other comments, and then we can move on. Cynthia mentions exams on Canvas can be set up using the lockdown browser.
Students can write the exam in the classroom, and that will give us some ability to lock them down from using it. Thanks for sharing that; it's worth thinking about as we're developing our exams. And then Maryam mentions, back to effective prompting: if the questions are designed so that the required context isn't provided, ChatGPT is not able to give the correct answer. In my master's-level course, designing the exam in such a way that it depends on the context discussed in class would be helpful. Absolutely. I think what Judy's going to look at with us in a few minutes is the limits, kind of the edges, of these tools right now. And one of those edges is specific classroom context, what hasn't been scraped, and how we might be able to leverage that. So thanks very much. Let's continue on, and I want to do the first activity with you. Raymond's going to be here the entire time; I'm sure he'll be okay if you direct-message him or bring up your questions as we go. Thanks so much, that was a really concrete look at some of the thinking around this. So what I want to do now is think about analyzing one of our own assignments. I'm going to assume that everyone in the room is either teaching, in a support capacity, or maybe studying. I'll give you about five minutes, so this isn't a huge activity, but five minutes to start putting one of your assignments into a tool like ChatGPT. I know that Judy mentioned TalkAI; perhaps Joe, you could share that in the chat for anyone who's not able to get into ChatGPT or is uncomfortable signing up for it. Derek Bruff came up with a suggested framework. Derek Bruff is the former director of the Center for Teaching at Vanderbilt University, and he was a mathematics faculty member. He came up with these questions to ask ourselves as we create assignments: How might AI undercut the goals of this assignment? How might AI enhance this assignment?
So, a focus on process: how could you make the assignment meaningful for students, or support them more in their work? Why does this assignment make sense for the course? Thinking about our learning objectives: what are the specific learning objectives for this assignment, and how might students use AI tools while working on it? He called all of this "kicking the tires," and he kicked the tires on a couple of his own assignments. So I want to give us about five minutes to kick the tires on one of ours. For this activity, what I'd like you to do is find an assignment that you are currently offering or supporting and run it through a Gen AI tool with a couple of different prompts to improve the response. You can use your knowledge of Gen AI to improve the prompts; I'm also going to give you a couple of approaches to prompting that might help. Finally, on the activity worksheet that I handed out, I created a couple of prompt templates that you can copy and paste in. So again, I'd like you to run your assignment, hopefully through ChatGPT, try the prompting out a little bit, maybe try one of the approaches that I'm going to show you in a minute, or the templates. After five minutes, I'm going to ask you to share what you found in the chat, or you can jump on your mic. So let me demo a couple of different approaches to prompting now, and I'm going to do this using ChatGPT-4. So this is ChatGPT-4, and I'm just going to copy and paste some of these prompts to make it a little bit easier, kind of like one of those cooking shows where they've already cooked the food. The first prompt is an example for a political philosophy course. I use political philosophy because I took it a long time ago, so it's a course I'm able to speak to a little bit. And one of the challenges is that it really takes an expert set of eyes to understand what is right and what is wrong with the different responses.
So let me create a new chat and paste that in. I'm using a prompt approach here called an outline expander. What an outline expander does is create an outline for something, and then you can expand it as you go. I find this really powerful when you're thinking about developing a piece of work. So I'm going to use the prompt: you're an outline expander; when I prompt you, provide a bullet-point outline and then ask me which point I would like you to expand, and so on all the way down. So here we go: you're an outline expander, and it should say no problem, but you'll see it actually started right away. So I'm going to remind it: I told you to please wait for the prompt. Something I find interesting about these tools is that you never quite know what the output is going to be. Okay, so it apologizes and is waiting for the prompt now. What I'm going to do now is paste in my main prompt; just give me a moment. I'm going to say: compare and contrast the views on the state of nature described by Locke, Rousseau, and Hobbes. And it's going to give me the bullet points. So it's giving me John Locke's view, now Thomas Hobbes's view, and it's going to start giving me comparisons. What should happen when I'm using this prompt approach is that it asks me which point to expand. So I'm going to expand on number six, and now it's going to go down another level. Again, when we're thinking about good prompting, this is going to allow students to go a little bit deeper into a particular essay. I'm demonstrating this to make that garbage-in, garbage-out point: the deeper we go with the prompts, the more information we get. Great, so now it's doing a comparison. In a moment, I'm going to get it to write a paragraph on the role of government. Write a paragraph on the role of government.
So it's going to give me a paragraph on the role of government as understood by Rousseau. I've expanded my outline, figured out what I want, and now I'm getting it to write the paragraph. And then comes what's called reflect and refine, and this is where prompts can get a lot better. I'm going to say: act as a professor of political science and evaluate and critique this essay; tell me the criteria that you used to do that. So now I'm going to get it to state its criteria and give me the evaluation it did: the thesis could be sharpened, further exploration could deepen the understanding, there could be more evidence and support, there could be some rephrasing. And once it's done all that, I'm going to say: based on this evaluation, rewrite the paragraph. In that example, I showed you a critical reflection; this is the approach used to make sure the reflection was at a higher level. So now it's rewriting the whole thing based on that evaluation. What I've found, and what the research is finding, is that by doing this you get a much higher-quality result. So I'm going to stop sharing now and give you five minutes to run one of your essays or assignments through ChatGPT. If you've already been playing with it, give it another try: try some other prompts, try that outline expander; I have a couple of templates for you. I'm going to time out five minutes, and after that I'll ask folks to share what their discipline is and what their experience was running the assignment through. Hopefully it will give you a chance to try out these different types of prompting as well. All right, so why don't we share back now?
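For anyone who would rather script this chain than type it turn by turn, here is a minimal, hypothetical sketch in Python. It only assembles the sequence of user turns for the outline-expander and reflect-and-refine pattern demonstrated above; `build_outline_chain` and its parameter names are my own illustrative inventions, and the actual model call is deliberately left out, since that depends on which tool or API you use.

```python
# A sketch of the "outline expander" + "reflect and refine" prompt chain.
# The model call itself is omitted: these are just the turns you would send,
# in order, to a chat tool. Names here are illustrative, not an official API.

SYSTEM_PROMPT = (
    "You are an outline expander. When I prompt you, provide a bullet-point "
    "outline, then ask me which point I would like you to expand. "
    "Please wait for my prompt."
)

def build_outline_chain(topic, point_to_expand, persona):
    """Return the ordered user turns for the outline-expander approach."""
    return [
        SYSTEM_PROMPT,
        f"Compare and contrast {topic}.",      # initial outline request
        f"Expand point {point_to_expand}.",    # drill one level deeper
        "Write a paragraph on the role of government.",
        # reflect-and-refine: evaluate, state criteria, then rewrite
        f"Act as {persona} and evaluate and critique this paragraph. "
        "Tell me the criteria you used.",
        "Based on this evaluation, rewrite the paragraph.",
    ]

turns = build_outline_chain(
    "the views on the state of nature described by Locke, Rousseau, and Hobbes",
    6,
    "a professor of political science",
)
```

The point of writing it down this way is that students (or you) can reuse the same chain on a different topic just by swapping the arguments.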
And as we share back, again acknowledging this was just a quick activity to get a feel for it: would you like to share in the chat, or unmute yourself, and tell us either what your experience was running the assignment through today, or what your experience has been running assignments through these tools so far? I'll keep an eye on the chat, or feel free to put up your virtual hand, jump on the mic, and share. It's interesting with the thumbnails; I don't know whether everyone's still in ChatGPT, or waiting to share, or just waiting for us to jump into the next section. Hannah, please go ahead. I mean, I'm kind of just shocked. I actually hadn't used this on any of the assignment prompts I'm developing for September. That's pretty much all I have to say. It's as if there's a student in my class who just completed a fairly coherent assignment on a very specific classroom-based discussion. So yeah, now I'm more scared. What's your discipline, Hannah? I'm in the School of Information, and I used an assignment from INFO 200, part of our minor in foundations of information studies. So we will be talking about ChatGPT in class a lot, but yeah, I'm still wrapping my head around this. Thanks for sharing. And Ilya Parkinson mentions: this was the first time I've tried this; it actually made me feel physically queasy, because it's as if everything I've been relying on has become untenable. The short essay it generated was a remarkable synthesis. Previously I was not sophisticated with prompting, says Jenna, still quite junior in this, but the guided template for prompts yields output that is richer. And again, I think this brings us to the equity issue as well: everyone is not prompting in the same way. Yeah.
So, Brie writes: I was exploring a reflection-style question; it answered very broadly and with, I'm assuming, sarcastic quotes around "my practice," but I can see that with more prompting follow-up it would get something more suitable, sorry, submittable. Dorothy says it's really good at essays, summaries and comparisons, partially lit reviews: the bread and butter of humanities assignments. Yeah, that's close to a perfect paper for my assignment. Carol writes: I tried following the same format; even on the first round, ChatGPT writes so much more clearly than almost all of my undergraduate students, but its output is still quite vague. And that's interesting, that line about the vague output; I think Anna mentioned that before, that it's kind of lacking nuance in its writing. What I've seen here is that by using the prompts you suggested, it's been delving very deep into a topic I really don't know much about, but just by asking, I feel like I'm reading a presentation. Jordan writes: I asked it to do APA citations, but it did not. What does that mean? You might want to try again, Jordan; it should be able to do those. The output varies: sometimes it will answer one way, other times it won't. And Antonio mentions GPT is not very good at course-specific content: it can understand broad-stroke information, but it's not good at integrating details provided specifically for the course, a molecular cloning lab in biochemistry. Judy's going to talk about limitations now, but thanks so much for sharing all of this. I think it's a useful starting point to see how students may be approaching our assignments using this tool. Yes, as we've already discussed and shared, there are some limitations that are known to us educators for now. Something that we knew quite early on is that the training data is quite limited.
ChatGPT only has information up until 2021, I believe; there's a cutoff date. If we ask it for the latest information, like the nutrition information Anna shared earlier that was developed in the last two years, it's not going to be able to help us or our students. Some of the knowledge we have on our course websites on Canvas, some of that specificity, would not have been scraped; it will not have that information. And accuracy: not all the information generated by AI is accurate. But I also find it quite amazing how confidently AI presents wrong information. That's something I can catch in my students' writing: in the past, I knew that when something was well known, my students would be able to write a coherent essay, but on the parts they were not sure about, you could tell the writing was not as confident. AI's output, though, is always confident; it can present wrong information, misinformation, with that same sense of confidence. Also citation, as mentioned in the chat: referencing, finding the exact article and making citations, is still a limitation in many of the AI tools. And yes, it can get smarter; I can see that in the chat. Expertise: AI also doesn't have the expertise that we have about the courses we teach and how they fit into the curriculum. In my own teaching, I teach an introductory course, and in my specific discipline I use a tone that is specific to that first-year introductory course. The AI's language can still be very general, but I ask my students to do something very specific to that expertise. So there's the limitation of AI: in general it looks really good, but once we get into the course content with its specificity, it's limited. Other things that we also observe: I'm going to jump to a slide, sorry, a slide that comes later on.
Something we also detect is that the output is quite uniform. It's good, but when you have 30 students, 50 students, 120 students all writing in the same style, you can feel that something is not working. That level of uniformity is something we've never seen before, or at least I have never seen before. And again, the style of the writing is very coherent, but it's missing originality: every piece of writing is the same. This is something that's been reported in this article: we are missing the originality from students. And of course it can also make errors, errors that we may have pointed out only in our classroom. We tell students about a common mistake, and the AI picks it up and repeats it in that very confident tone; again, the AI doesn't have the context of what we talk about in our class. The paper, which was published last week, also acknowledged that it's going to become more and more difficult for us to detect AI-generated answers, just like Anna mentioned in the chat. AI will get smarter as more and more students, as all of us, continue to contribute to it. It's actually using all the prompts, all the questions that we ask, so it may get better, making it harder and harder for us to detect. Let's go back to my other slide. So, some things that can help us design more resistant assignments. Classroom assessment techniques: the formative assessment we do on an ongoing basis in the classroom, in a synchronous environment, where we have discussions with students. Problem-based learning, team-based learning, a lot of collaborative learning, where we ask students to talk to each other, to communicate with each other, to learn from each other. Those are also assessment techniques that could be more resistant, because now we are asking our students to interact with another human being.
Again, formative assessment in the classroom helps because formative assessment usually has lower stakes: it counts for less weight, so the temptation for students to cheat or to use AI is lower. If we have more of these smaller assignments throughout the course, it makes it less attractive for students to use AI, and we are inviting them to contribute their own authentic responses. So formative assessment is one way to help us mitigate some of the challenges we are seeing. Other options include student presentations, in-person invigilation, and more in-classroom assignments. But of course, can we do that now? I would also like to acknowledge that we spent the last three years trying to put everything online. We just came back to a more normal year with a little more in-person teaching, and now we have to convert back to in-person classroom assignments and in-person exams. That sort of erases all the work we did in the last few years too. Other ideas: we've talked about the limits, so let's think about ways we can leverage those limitations. Maybe we can set up some guidelines with students: what can be used, which tools we define, and an explanation of the privacy considerations. Something I just learned in the last hour from your contributions is that we can explain to them why we are here to learn. If AI is going to write all the essays, are we agreeing to submit every assignment with the help of AI? What is the purpose of our students coming to university? We need to explain that to our students. Citation: asking for more citation, not just the citation style, but really asking students to include citations and references. And at the same time, we also need to explain to them what is not acceptable. Providing unique examples and concepts in the assignment, making it go beyond the training data.
I asked them to look for information from the last two years. But again, I'm feeling that it's going to be harder now: many of the AI tools, like ChatGPT-4 and Bing AI, are able to get information from more recent data. Again, in-class high-stakes assignments, and making more use of our TAs. The way I talk to my TAs now is that the writing is pretty easy to read, and I have to say that in the assignments we receive, the writing is at a very good level, so their job is more to check the references and citations, and to teach our students what scholarly writing is. Why is university writing different from regular writing? Because it's scholarly: we cite and acknowledge other people's work. So our teaching needs to shift a little. That is how we can leverage the limitations: not just to teach more, but to teach better. So, after some ideas. I'm seeing the chat is so busy that I haven't been able to follow; sorry, I will spend the next five minutes catching up while you're busy with the next activity. In the chat, we just put a link to a Padlet. The Padlet has already been populated with some ideas on how you may want to spend the next two or three weeks to prepare for your course. We would like you to use the like button, the heart shape, to tell us something you might actually try, ideas that may be applicable in your context. We also welcome you to add your own ideas. So, maybe about four or five minutes for the Padlet activity. Let's go on to the Padlet, and thank you for sharing the Padlet screen. We pre-populated the Padlet with some ideas of what you may want to try. They may not all be applicable in your own context, but let us know, by using the heart button, what you might try. Or if you have something new to share, add it using the plus sign. Thank you. Yes.
At the bottom right of the screen, use the plus sign, and you can add your own ideas. I see "letting students draw mind maps instead of small written assignments" just came up as an idea; thank you. I'm also reading a question in the chat: if we encourage an AI option in an assignment, can that assignment still count towards grades? It really depends on what the AI tools are and what AI options you are asking your students to use, so I wouldn't be able to answer that without seeing your whole assignment. And a couple of others have just come up: emphasizing motivation in student communications, like Raymond mentioned. If you use an ungrading approach that relies heavily on self-assessment and peer assessment, not only is it more resilient to AI, it also shifts the focus from summative to formative. Request diagram outputs from students. And someone just shared the AI policy in her syllabus, which is great; maybe we can even get your permission to share it on our web page? Lucas, should we move on? Why don't we move on. All right, so thanks for sharing all of that. I think this is a bit of a turn in the workshop, and what I mean by turn is: this is challenging for all of us, thinking about what role these tools might have in our classrooms. And I will add, this isn't an entirely new problem; we've been dealing with things like Chegg, Course Hero, and independent tutors for a long time. But this is more widespread, so we're working with that. On the other hand, a lot of us are using these tools in our day-to-day. So for the next part of the workshop, what we want to think about is what it might mean to integrate them into our assignments, and how we might leverage these tools to enhance our teaching and to make certain things that we do in teaching easier.
So the first thing I wanted to mention is developing learning materials, and a caveat I mentioned already is the intellectual property challenges around doing this. While AI can be used to develop learning materials, we would need to be really careful with publishing them, with how they're used, and so on, acknowledging that we don't really know where the training data was scraped from. With that said, I think there are some interesting ways we can use Gen AI to develop these materials. The first is questions and problems. Gen AI can be really good at developing short-answer questions based on learning objectives. So in this prompt here, I say: develop 100 short-answer questions based on the following LOs. Alignment is one of the strengths I've noticed: Gen AI can align very well to learning objectives. There's also the ability to develop things like questions at scale, rather than writing them all out ourselves. Examples and case studies: GPT-3.5 is pretty good at writing case studies. In this case I asked it to act as a social psychology professor at a university in British Columbia, which is an example of something called a persona pattern, so I was very specific, and to develop 10 concise case studies that exemplify various social psychological concepts. Excuse my generic case-study generation here; I'm not a disciplinary expert, but even with this fairly generic prompt I was able to get fairly effective case studies. I used this recently in an instructional skills workshop I ran last week, where I needed case studies or scenarios for teaching an active learning activity; I was able to generate 10 teaching scenarios, which I reused in that class. And Ethan Mollick talks about the ability to develop examples based on concepts: in this case, they take a research concept and ask the tool to develop as many varied examples of the concept as it can.
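To make the "questions at scale" idea concrete, here is a small, hypothetical Python sketch. The function names and exact wording are my own, not from the workshop slides; it simply assembles the two kinds of prompts described above, short-answer questions aligned to a list of LOs, and the persona-pattern case-study prompt, so you can regenerate them for any course:

```python
# Illustrative prompt builders for the two use cases described above.
# These only construct prompt strings; you would paste the result into
# whichever chat tool you use.

def question_prompt(learning_objectives, n=100):
    """Build a prompt asking for n short-answer questions aligned to LOs."""
    los = "\n".join(f"- {lo}" for lo in learning_objectives)
    return (
        f"Develop {n} short-answer questions based on the following "
        f"learning objectives:\n{los}\n"
        "Make sure each question aligns with exactly one objective."
    )

def case_study_prompt(discipline, n=10):
    """Build a persona-pattern prompt for generating concise case studies."""
    return (
        f"Act as a {discipline} professor at a research university. "
        f"Develop {n} concise case studies that exemplify various "
        f"{discipline} concepts."
    )

prompt = question_prompt(
    ["Explain the persona pattern", "Describe prompt chaining"], n=20
)
```

Templating the prompts this way is what makes the scale argument work: the same two functions cover every course, and only the LO list and discipline change.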
And they get students to develop those same examples, to give them an understanding of diverse examples around concepts. In our teaching, the movement from concrete to abstract can be quite useful for learning, and developing these examples can be challenging for us as teachers. Course design: as somebody who works in the area of course and lesson design, I've found GPT-4 pretty good at doing some lesson design for me. This is an example of a prompt I use: act as a faculty member in political philosophy and develop a backward design map using Fink's approach. If you're not familiar with course design, Dee Fink developed a well-known backward design approach, and the tool is very good at calling on Fink's work and developing these backward design tables. I added some specificity: the course is being taught at a research university in Canada; make sure the objectives are measurable and align with Bloom's taxonomy, and that the assessments are authentic and aligned with the learning objectives. This won't design an entire course or an entire lesson for me, but it can give you a lot of ideas to pull from: creating these design tables, creating learning objectives, and aligning them. Another example is a little more bleeding-edge. This is from a blog post by David Wiley, and what he talks about is thinking into the future of AI, and how it may replace some of the work that textbooks do. So rather than a textbook in his courses, he's getting students to use a set of prompts as a dynamic textbook: sending them home with these prompts, asking them to prompt the tool to understand sleep, in this case, and then using these prompts as a way to study.
I think this is still in its early stages, and we do have to deal with things like accuracy and hallucinations, and some of the complexity it may miss, but it's an interesting thought experiment about where we may go with these tools. And what I'm going to do now is jump through a couple of slides, pardon my scrolling, and turn to Elisa Baniassad, the Academic Director of CTLT, to give an example she's been looking at for creating questions using Gen AI. Over to you, Elisa. Hi, everyone. It's lovely to see everyone here. I'm super enthusiastic about the energy in this room, and it's been incredible hearing all the concerns people have and all the ideas about the opportunities this presents. Just to introduce myself: I'm Elisa Baniassad, and I am a professor of teaching in the computer science department. My area is software engineering, so not AI, but I'm going to start trying to claim that I'm AI-adjacent, just because I sit in an office close to some AI researchers. What I've been doing is looking into how generative AI can feed into my teaching process: basically trying to explore ways in which I can save myself time. I think when all of us hear that it's going to be more difficult to assess students authentically, the immediate next question is: okay, well, I only have so much time, I only have so many TAs; we can only grade deeply, or in person, or in an invigilated setting, for so long. So how can I save myself time? One of the things I was looking into was saving myself time creating assessments, and saving myself time by having students interact with the tool.
So that office hours might be lighter: maybe I could take some of the burden of answering basic questions off the TAs and myself and load it onto ChatGPT or other tools. Somebody in the chat asked: what do I do in terms of these hundred-level courses, and how will I move the conversation about the types of things that I'm teaching? I think one of the interesting opportunities is that ChatGPT can teach lower-level stuff pretty well. What it can't do is bring the nuance of reality to the conversation, and that's where we still come in. So we still have jobs, and it's still important for us to actually bring reality, as we'll see in a little bit. ChatGPT kind of answers the way things should be, or the way things have been; it has either a romantic view or a very biased view of reality. Once, I asked whether NSERC funds discipline-based education research, and ChatGPT very confidently said yes, it definitely does, which it definitely does not. But it should. So ChatGPT knows how the world should work; it doesn't necessarily always know how the world does work. But it does pretty well in terms of computer science, as Raymond attested earlier in his trials, so thanks, Raymond, for outlining that. What I was thinking about was how I spend a lot of time making assessments in my classes: quizzes, for instance. In some classes we give mastery quizzes, and we have hundreds of them, all made by hand, and this takes up tens and tens of hours. So I asked: could ChatGPT actually make me a quiz question? I asked it to do that, and what follows is actually a simplified version of the question, just for the understandability of all of you, since you're not in software engineering.
I said to ChatGPT: ask me a question that will test my knowledge of the difference between the single responsibility principle, which is a software design principle, and the interface segregation principle, which is another software design principle that students often confuse, and ask it at a level that doesn't require coding knowledge, because I'm talking to all of you and not to students in my class. What it did was construct a question that had a bunch more options than this, but they didn't sit nicely on the slide. This is a pretty viable question: imagine you have a toolbox; it used a nice real-world example. Imagine asking this question in a really introductory software engineering class, which we've never actually been able to do before, because usually software engineering concepts like this come in around year three. Looking at this, it seems like there is an accessible way to ask these questions that would give people ideas about these concepts. In any case, this is a pretty viable question. Raymond, you can check this out, you're another computer science prof: what do you think, does this look like an okay question? I don't know if Raymond's actually here. It's reasonable to me. Right, you'd still have to take my class, though. But yeah, it's a pretty reasonable question. Then I thought: the other thing I spend tens and hundreds of hours doing is making practice problems. Practice problems in computer science are probably similar to practice problems in other areas: they have to be almost even better than real quiz questions, because they have to be pretty bulletproof. When you're giving a practice problem, you have to know exactly why it's right, which options are right and which options are wrong.
And if your reasoning is incorrect, then you will have a Piazza thread, or a forum thread, and maybe even a Reddit thread about how terrible your practice problems are, which I have experienced in my life. So, not wanting a whole bunch of Reddit stuff, I thought: I wonder, if I put this exact same prompt back into ChatGPT in another session, will I get the same answer exactly again? And you do not; you get a totally different answer, a totally different question. So I thought, oh, I could actually use the prompt to generate my quiz question, and then just give the same prompt to the students and say, okay, here's how I generated your quizzes. You have the exact source material to generate your own quizzes, and you can practice over and over and over again. And the super cool thing about this is that, first of all, in computer science its knowledge is really good, so I can actually trust it on things like this; this material is heavily documented, the kind of thing we'd even trust a wiki on, so it's very trustworthy information that it's putting back. But it can even interact: if I said A was correct, and it said no, A wasn't correct (I can't remember which was the right answer here), I could ask: if I thought A was correct, what would I be getting wrong? Where would my understanding be falling down? And ChatGPT will try to answer; it will say, well, it might be that you have this misconception about this, or it might be that you have that misconception about that. And reading through those responses from ChatGPT, I thought: that is exactly what I would say to a student grappling with this problem. I've explored this a little bit, and I'm kind of burying the lede here, because we're about to share this resource generally, but I just wanted to quickly share a link to a little post that I made about this experience.
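The workflow just described — one prompt driving both the instructor's quiz drafting and the students' self-generated practice quizzes — can be sketched as a small reusable template. This is an illustrative Python sketch, not part of the workshop materials; the function name and exact wording are assumptions based on the prompt quoted above.

```python
def quiz_prompt(concept_a: str, concept_b: str, no_coding: bool = True) -> str:
    """Build the quiz-generation prompt described above. The same string
    can be pasted into ChatGPT by the instructor to draft quiz questions,
    then shared with students so they can regenerate fresh practice
    questions on demand (each new session yields a different question)."""
    prompt = (
        f"Ask me a multiple-choice question that tests my knowledge of the "
        f"difference between {concept_a} and {concept_b}, two concepts that "
        f"students often confuse. After I answer, explain which option is "
        f"correct and what misconception each wrong option reflects."
    )
    if no_coding:
        prompt += " Ask at a level that does not require coding knowledge."
    return prompt

print(quiz_prompt("the single responsibility principle",
                  "the interface segregation principle"))
```

Because the model does not answer deterministically, pasting this same string into separate sessions produces different questions each time, which is exactly what makes it usable as self-serve practice material.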
I have not tried this in class myself yet, because this semester has not yet started, but we will be trying to deploy a quiz generated like this in our software engineering class this fall. Now, I will say that you have to be careful about plagiarism here, because, as Lucas and Judy mentioned, these questions might actually be drawn straight off one of those cheating websites that pull copyrighted material from other instructors. If you're publishing these or doing anything like that, then you should probably be really careful. My preferred approach will be to reword the real quiz question so that I'm not directly using material from another instructor without permission. There is also the accessibility issue around ChatGPT: it has not passed the privacy impact assessment, so I can't require it, or rely on it completely, for tutoring my students. I can say, okay, if you want to jump the queue for office hours, you can interact with ChatGPT this way. Otherwise, maybe I could pre-generate some of these and have some chat links so that students could follow a chat that I had done; I showed the full conversation, with me being a student who was going through a practice quiz. So there are some ways to make ChatGPT content accessible to people, to make sure that you're covering off those privacy and accessibility issues. Okay, so thanks, Lucas, this was great. Thank you. Let's jump back now into learner support — sorry, learning material creation. I was going to do an activity, but we don't have too much time for this. We talked about generating examples, we talked about using it for discussion questions, and we talked about the potential of using it for a course design table.
And I think Lisa shared an interesting example where she's combining learning material development and learner support. I'd love to hear from the group, either in the chat or on your mic: what sort of learning materials or teaching materials have you been thinking of developing, or could you develop, using these tools? So again, you can just drop it in the chat, or raise your virtual hand and jump in and share on the mic what you've been thinking about. Are they case studies, or are they problem sets? What might you be able to create and share in your discipline? So Vika says in-class learning activities. Thank you. Vika, would you be comfortable jumping on the mic and explaining what your discipline is and what sort of activities you're thinking about? "Hi, sure, Lucas. I am going to be teaching a second-year course in the sustainability program; it's an interdisciplinary program here at UBC Okanagan, and the context is natural resource governance and management. There are many opportunities for in-class discussions and group activities, and something like ChatGPT — I'm testing whether it can — it is able to generate activities if I provide enough context and the types of topics. I can ask it to generate an activity that takes 30 minutes, 20 minutes, 15 minutes, and it can develop a plan which can also guide me in rolling it out in the class, with or without a printout. It is good guidance for me, keeping in mind the constraints of a classroom and timing constraints."
Wonderful, thanks for sharing that. And what would be neat with interdisciplinary work — I think I put the reflection example in that activity sheet, and, following the example I gave you with prompting, it would be interesting to make a learning activity and then say, "act as a philosopher and analyze this learning activity; how would you change it?", and have it act as different disciplines to refine the activity and make it more interdisciplinary. Thanks for sharing that example. Mara mentions case studies and tests: "I might ask for some in-class activities too, but also some basic definitions that are used a lot in my field, and at least in my experience it's been excellent for definitions in the class." Let's just see if there's anyone else, otherwise we're going to jump in. Oh yeah, please go ahead, Ron. "I'll have to unmute myself. In the past, I have always used published cases from the University of Western Ontario. And what I'm concerned about is, for example, in talking about marketing, which is a module that I use in the course, in the past I've always relied on acquiring cases from the University of Western Ontario. But if I went on ChatGPT — you mentioned that there's really no authentication as to where the material comes from, so you really don't know whether you're violating trademarks or copyright or anything. So I take it that this is something that should not be used?" Yeah, I mean, it's something that needs to be used with a lot of caution, absolutely, and that's a huge challenge. Ron, that's a really good point, and I think it's an interesting area with copyright right now. I would hope that that changes a little bit; when you use something like Bing AI, you can trace the references a little bit more, and then you may be able to use the reference checking to figure out where the material is coming from.
Someone mentions: "I strongly recommend my students use it to create their final reports and papers related to the practical cyber security projects they have done, to avoid a lot of grammatical mistakes." So thanks for sharing that. What I'd like to do now is talk a little bit — oh, excuse me — about learner support. One thing that I think is really interesting is what a powerful learning tool students are finding these tools to be, and thinking about how we might leverage that to support the learning process. I'll give you a couple of examples. I'm not going to actually demonstrate using the tool, but this one is called the flipped interaction model. I work on another project, called the Chapman Learning Commons, where we help students think about how they learn and strategies for learning, and in my experience so far ChatGPT has been a really good tool for this. So here's my prompt: act as a learning tool; ask me questions to understand the challenges I'm having doing well in Biology 100; once you've understood my challenges, write out a plan to help me improve my learning; and start with the first question. Sometimes you have to force it to go one question at a time. What it's going to do is ask the student question by question to understand what their learning issue is. And the output that I found so far: it asked me things like, are you chunking the concepts? Are you testing yourself on the material as you go? Really good metacognitive tips, and then afterwards it prints out this guide. So I think of it as a tutor — and there are already tutoring programs developing; eCampus Ontario has developed one recently — and it's worth thinking about what it could mean as a student tutor. Secondly, as a practice tool. This is an example of using a game pattern, and again I put an example in the activities; if you haven't tried it, we did this in the promptathon.
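The flipped interaction prompt above can be wrapped in a small template so that the course name is the only thing that changes, and the "one question at a time" instruction is baked in from the start. A minimal sketch, assuming Python; the function name is hypothetical.

```python
def flipped_interaction_prompt(course: str) -> str:
    """Build the 'flipped interaction' prompt described above: the model
    interviews the student one question at a time to diagnose their
    learning challenges, then writes out an improvement plan."""
    return (
        f"Act as a learning tool. Ask me questions, one at a time, to "
        f"understand the challenges I am having doing well in {course}. "
        f"Once you have understood my challenges, write out a plan to help "
        f"me improve my learning. Start with the first question."
    )

print(flipped_interaction_prompt("Biology 100"))
```

Including "one at a time" up front addresses the issue mentioned above, where the model otherwise tends to dump all of its questions at once.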
It can be very good at creating games to help students practice concepts. In this case I said, I want you to play a game with me to teach me how to identify logical fallacies, and it says, of course, I'd be happy to play with you. It gives me the scenario, has me guess the logical fallacy, and then tells me the answer. You can go back and forth and play these games with it in most areas. A fun game, if you have a chance, is to play a prompting game with it: tell it to create a game for you to improve your prompting for ChatGPT, one question at a time. (Just going to jump in there: we already did that one.) The last example, which I'm not going to demo now but wanted to mention, is writing analysis and feedback. I think I mentioned early on in this session that I've had some challenges with written output in my life, and I've found that it's been exceptional at helping me improve my grammar as well as my written output. One of the ways I've been doing this is putting in a piece of my writing and asking it not only to rewrite it, but to rewrite it and then create a table underneath showing me the before, the after, and what changes it made, sentence by sentence — and it does these tables beautifully. And then asking it, based on this, to create a worksheet for me so I can practice the errors that I made. So I think, when we're thinking about this as faculty, it's worth thinking about teaching and learning and what sort of learning students are doing, but also what sort of learning you can guide students to do with these tools. The last section we wanted to talk about is incorporating it into course assignments. Why might we want to do this? We can acknowledge that students are using it; it's becoming a skill that is going into the workplace; it's in students' daily learning lives. So how can we think about the space it might have in our course assignments?
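The rewrite-plus-table-plus-worksheet pattern just described can likewise be captured as a template that wraps any piece of writing. A hedged sketch; the function name and exact phrasing are assumptions based on the description above.

```python
def writing_feedback_prompt(text: str) -> str:
    """Build the writing-feedback prompt described above: rewrite the
    passage, show a sentence-by-sentence before/after table listing the
    changes made, then produce a practice worksheet targeting the
    writer's recurring errors."""
    return (
        "Rewrite the following passage to improve its grammar and clarity. "
        "Then create a table underneath showing, sentence by sentence, the "
        "before, the after, and what changes you made. Finally, based on "
        "the errors you found, create a short worksheet so I can practice "
        "correcting them myself.\n\n" + text
    )

print(writing_feedback_prompt(
    "Me and him has went to the libary yesterday."))
```

The table step is what turns a silent rewrite into feedback the student can learn from, and the worksheet step turns that feedback into deliberate practice.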
I looked around a little for frameworks for bringing it into course assignments. I am finding this is really emergent, so rather than a framework, I just wanted to share a couple of elements that describe how it's being used in course assignments now. And because I like acronyms (I'm a paddleboard instructor, and we're always using acronyms for things), I use the acronym OAR for this: output, analysis, refinement. Assignments often have a focus on getting students to create an output of some sort, either by giving them specific prompts, telling them how to prompt, or just using open prompts. They often have an element of analysis, and this is where we can use those limitations: we can get students to critically analyze the output to find errors in it. I'll show you an example where we analyze for bias, but we can also analyze for accuracy, and we can analyze for implicit perspectives. And then there is some sort of refinement, where, based on this analysis, students refine and build on the output to develop a submission. All three elements aren't in every assignment, but often one or more of them are inside an assignment. Also, when we're developing assignments, it's worth thinking about aspects of the process, and there are three aspects I wanted to bring out here. One is choice. When we think about privacy, and when we think about equity: are we giving students a choice to use gen AI or not use gen AI for an assignment? I've seen instructors doing this, where they say, if you do this assignment with gen AI, these are my expectations; if you do it without, these are my expectations — or just giving students the option of using it or not. The second is support. The digital native discussion years ago, I think, helped create this idea that our students are experts in these tools, and while they have some expertise,
I think, if we are going to use it in assignments, we need to scaffold the process for our students, think about equity, and ensure they're all able to complete these assignments effectively. So: what supports are we giving them? How are we helping students think about bias and privacy? How are we helping our students improve their prompts? And then, because of the emergence of these tools: how can we collaborate with our students to understand them? How can we collaborate with our students on these assignments? So, just to go through a couple of examples with you now. One example I wanted to share is from Malik and Malik (2022), and again, we'll share all these resources with you. They gave students a concept that they had learned in class and a prompt, and for the assignment the students had to generate creative examples of that concept based on the prompt. So they were focusing on the output and on the analysis; they weren't focusing as much on refinement, but they were able to take a single concept and build a number of diverse examples. The second example I'm going to get Nicole to go over now. "Thank you, Lucas. So in this example, the instructor is using ChatGPT as a source of information for the students to analyze. The context is that it's a course for arts students with no science background, aimed at increasing their health literacy. They're given a scenario in which they have to do a little bit of research, and they're guided on how to find reliable sources for their answer. They are asked to get the answer from ChatGPT and then analyze it in an assignment. So again, the students will be discussing the advantages of using it as well as the disadvantages, and then having a deeper discussion later, when the cohort gets together for discussions." And Judy, do you want to go?
"So this is not my course; this is in Food, Nutrition and Health, a 300- and 500-level course on food product development. Dr. Singh's original assignment, before last November, always asked the students to use the internet to look up some food product and, as a food product development course, to modify the ingredients and the composition used in that food product. The outputs include examples such as improving the sustainability of the food product. Students do need to go back and critique, analyze, and also refine. — It's showing that my internet is not stable. Okay, so everything is on the slide. — I'm asking the students to refine the output from the generative AI tools and make modifications. Students do need to know the basic information: which ingredients will make the food softer or sweeter. And then they also need to explain why. So: output, analyze, and refine." Wonderful, thanks so much, and I'm happy it was your internet, not mine; I thought everyone had frozen there. The last example we wanted to share is from theatre and film. Patrick Pennefather, in an emerging media class, has his students analyze AI-generated images. He challenges them to produce an image of, I think, a scientist working at a computer and analyze it for gender bias, and what he was finding is that most of the images are male. So again, this is getting into analysis and looking at the bias of the tool. And for fun, one of us put the prompt of a computer scientist into Midjourney as part of preparing for this workshop — do you want to mention what you found? "Yeah, this is actually something — I'm very involved with women and, you know, gender non-binary or gender-diverse people in computer science."
"You know, computer science has usually been, or has traditionally been, a certain type of person, and so I've been very involved in representations of the non-traditional computer scientist. Every year I do a search for 'computer science professor' in Google, and it's not getting that much better in Google, though a little bit better; I think Google has been stacking the deck a bit. AI will tell it like it is in terms of what is actually happening on the internet. So I put into Midjourney the prompt 'computer science professor explaining something to a student,' hoping that at least maybe the students would not be male, but as far as I can tell they are all pretty traditional computer science students, or how we have thought about them in the past. I regenerated six or seven times, and things stayed looking about the same. And then when I said 'female professor,' everything became women — and very amazing-looking women as well, so not necessarily the most realistic representation; I didn't even want to post those. So it's quite a strange experience, but yeah: bias in the sample set, I think, is what's happening here." Wonderful, thank you. We only have about nine minutes left. What I was going to do is have you think about how you might use gen AI in an assignment that you've developed, but what I'm going to do instead is, in a moment, stop sharing the screen, and I'd like you to share. For those of you who have already thought about using gen AI for an assignment in your class,
please feel free to share in the chat, or unmute yourself. In addition, if you had an idea for an assignment as we were talking through things, or you could see using an assignment based on output, analysis, and refinement, please share in the chat or by unmuting yourself. And finally, if you have something burning that you want to bring up, something you didn't get a chance to say or discuss during this workshop, please bring it up now. So I'm going to stop sharing, and just open it up to everybody, and I'm going to stop recording at this point as well, so you can feel comfortable doing this.