Everyone, this is not really a panel. We have designed a series of workshops with smart, low-touch activities. We would like to know what you think about AI and how you're using generative AI in your classroom, and really thinking about academic integrity. I think many of us are actually thinking about our assessment: how do we design our assessments, or how do we design our individual assignments, so that students who now have access to generative AI can still learn what we want them to learn in the first place? So there's a series of low-touch activities, some sharing, some talking, but at the same time we will also be walking with you on this path. We'll review the story so far: what has happened since last November? What do we know about generative AI? Then we would like to guide you to think about your own course assessment design, and also about how you might make over some of your assignments. Not all, maybe not all, maybe not at all, but really thinking about our assessment strategies and then going into individual assignments. So that is our agenda for the next hour and a half. Part of the learning objectives is to review some of the prompting techniques, and ways of refining output, that some of our students may be using for our assignments and assessments. Then we will talk about some strategies to mitigate the use of generative AI. And of course, we will continue to think about academic integrity and how we can sustain creativity and critical thinking in our assessment design. So let's get started with our first activity. We would like you to use the link below in the chat and go to a Padlet. We really just want to find out what your experience is so far. ChatGPT 3.5 has been available for about 10 months now, and many other tools have become available.
And let's take a look at what's going on and how much you have been engaged with it. The first couple of questions at the very top are about your confidence level: we've been using generative AI, we've been hearing about it, so how confident are you about revising your assignments? And then please also scroll down the Padlet a little. What is your current thinking? Are you actually going to allow, or actually integrate, generative AI in your class? Are you thinking about using some of it, but with limitations? Or no, you're not ready, we still don't know what it is yet, and you do not want your students using AI in your class. So perhaps use the like button, the heart button, and tell us where you are. Thanks for sharing, Lucas. And thank you, we see a range of experience. On assessment design we are seeing more people voting "not confident" or "a little confident," and that's why we are here. We are going to share some examples we have heard of, since we know some people are already integrating it in the classroom. On experience, we also have a range. And most of you are considering the use of generative AI, with limitations, in the classroom. Thank you for sharing. Would anyone here like to share your experience, and why you clicked what you clicked? This is a very discussion-based session, so feel free at any time to raise your hand or just unmute yourself and share your experience throughout the workshop. Thank you very much. Oh, Valerie, you unmuted yourself. Please tell us what you think. I'll give it a whirl. Yes. I've got very little experience with AI. I've just tried it out with your pre-class exercise, and I was really impressed. I think I should be using it more.
The thing is, I was the person who gave the green light, not that I necessarily know what I'm doing, and I'm not confident yet, but I think it's inevitable. I think our students, regardless of whether we put limitations on it, and I would put guidance out there, are going to use it. It's already out there, and how would I stop them from going and checking it out and using it? So I think I'd be fighting the tide by saying, no, sorry, you can't use it. How am I going to police that? And actually, should I even be policing it? Should I even be trying to stop it? Maybe we need to engage with it, harness it, and use it to our advantage. Thank you, Valerie. Yes, we will talk about the policing, a little bit about the checking, and I was going to leave that conversation for the right time in the slide deck. Okay, so thank you for sharing your experience so far, and Valerie for sharing that you found it impressive. I have so many other related questions I would like to ask you, Valerie, but let's move on. Lucas, can you tell us the story so far? What happened in the last 10 months? Thanks for sharing that, and thanks for that example, Valerie. I think it's interesting when we start playing with these tools, realizing what they can and cannot do. What I wanted to do for this section is, I've called it, we've called it, the story so far, just to acknowledge that this is changing so quickly. When I started taking a look at this and working with it early last year, it was quite different than it is now. So, acknowledging this change, and just to quickly introduce myself: I'm Lucas Wright, a senior educational consultant at the CTLT, and I've had the privilege of doing about 20 workshops now around ChatGPT and generative AI and developing resources. As part of that, I've gotten to take a deep dive into these tools and also hear from other folks.
And I think that's one of the big goals for today's session: to hear from all of you. This is a more emergent space than I've ever seen in the technology area at the university, and it's a great space to share and have those important conversations. So, as part of that, let's talk about the story so far. The goal of this part is to get us all onto the same page, acknowledging from your poll that some of you are using it regularly, but some of you are new to this. At the same time, I want to focus a little on some of the capabilities I'm seeing so that we understand them. We've created a Google Doc for you; Manuel is sharing it in the chat. That document has all of the resources we talk about today, as well as the prompts for some of the demos I'm going to do later. I encourage you, and I'll mention it when we get to them, to join me: open up ChatGPT, open up Bing, try out the prompts, and see what you think as I go through them. So, what is generative AI, or generative artificial intelligence? It's a type of artificial intelligence system that, unlike AI before it, generates things. It generates images, it generates language, and it generates multimodal things now, meaning it can read images and generate text based on them. To do this generation, it uses large language models, which have been trained on scraped internet data, some would say our collective knowledge that we put on the internet, and it uses word prediction to predict the next set of words. What this has been able to do is generate new data. Another side I wanted to mention in the definition: while this is a predictive tool, it has turned out to be very powerful, including emergent capabilities. An example is emojis: ChatGPT wasn't trained to create emojis, but it is able to.
I think another side of these generative AI tools is that they're black boxes, in a way. It's very difficult, even for the developers, to understand how a model is drawing its output, what data it's pulling on, and why it's pulling on that data. That's a really interesting space to be in. So, part of using these tools is prompting, and a good way of thinking about prompting is that term, garbage in, garbage out. A prompt is a seed statement that guides generative AI to create the output. However, these seed statements can range from something very simple and generic to something really, really complex. There was an art piece that won, I think it was the Colorado State Fair, that was prompted using an image generator, and it took the artist two weeks of prompting to produce the images. When we read news reports, something I find interesting right now is that there are so many media articles that say, "I tried to do X, I tried to pass this assignment with ChatGPT, it didn't work." But really, a lot of this is in the prompting. I'm not saying it can do everything, but prompting is a real art, and a real science in a way, and I think there's debate over which one it is. So let's talk a little about capabilities, and after that I'm going to do some demos so that we all understand the capabilities a little better. Again, these are very emergent. Then we can use this to think about our assessments a little more. This slide shows standardized exam results from GPT-4 compared to GPT-3.5. If you're using the free version of ChatGPT, you're probably using 3.5, and early media reports and early research were often looking at 3.5. For example, on the LSAT, GPT-3.5 scored around the 20th percentile; GPT-4 went up to the 90th percentile. So there's a big difference between these tools, and this raises some interesting questions around equity.
Currently, GPT-4 is a paid tool and GPT-3.5 is a free tool. So we have some students paying for GPT-4 and some students still using GPT-3.5, and it really makes a difference in terms of the output. Here's an example, a paper by Lee et al. that I've linked in the resources. What Lee et al. found is that GPT, and they used GPT-4, was really effective at critical reflections; the context is pharmaceutical sciences. For this research, they worked with a set of student critical reflections, used complex prompting to have the model write its own critical reflections based on the same assignment, and then had evaluators compare them. What they found is that the ChatGPT-generated reflections consistently outperformed student reflections, and the evaluators were unable to determine which was which. Here's an example from computer science, using GPT-3.5. This will have changed a lot with GPT-4, but for, I think it was second-year Python programming, they found the model was able to answer student questions at rates of 56.1%, 67%, and 67.3%. So it wasn't able to ace the course, but the researchers mentioned in the article that it would have been able to pass it. Adding on to this, the capabilities are growing very quickly with these tools. What I'd like to do now is a demonstration across three tools, just to show you some of the capabilities we're seeing. I encourage you, if you want, to open that document and follow along, running some of these prompts through your own tools. So I'm going to this document that we created, and you'll see at the top, under demonstration, I've created some prompts. This is an interesting side of prompting: the idea of sharing prompts. I know that some faculty have been sharing prompts with their students so that students can use those particular prompts to do their own learning.
So I'm going to take this first prompt, thinking about academic writing. The prompt is: act as an outline expander; generate a bullet-point outline of a summary essay about the state of nature in political philosophy, then ask me which bullet point I should expand; create a new outline for the bullet point I select; at the end, ask me which bullet point to expand, et cetera. I'm using this prompt to show you the interactive quality, as well as some of the resources it can pull on. A couple of things about this prompt. One is that I used a persona here: act as an outline expander. I could also say, act as a political philosopher. I also used an interactive structure, an outline expander, which enables us to go deeper and deeper on the prompt. I used the state of nature in political philosophy because it's one of the only things I remember from my political philosophy course in undergrad, so I can get a rough sense of the quality of the output. So I'm going to ChatGPT-4 now and entering this prompt, and this will start giving us an idea of some of its ability. It's going to come up with the state of nature; if you're familiar with state-of-nature scholarship, it's bringing up Thomas Hobbes and Jean-Jacques Rousseau, doing a little comparative analysis, criticism and limitations, and then it adds a conclusion. I think students using this in their writing are not just doing straight generation; they're using these tools to generate pieces and also to go deeper and deeper. So what I'm going to do now is expand on this bullet, ethical and practical considerations, and now it goes a little deeper. I can just keep doing this all the way down as I build out my particular piece of writing. I wanted to include this demo because it shows a little of the power here, and this interactive way that students may be using this tool to really work with it.
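Prompts like this one follow a repeatable structure: a persona, a task, and an interaction rule that keeps the exchange going. As a rough illustration of that structure, here is a minimal sketch of a template that assembles such a prompt from its parts. The `build_prompt` helper and its fields are hypothetical, for illustration only; they are not part of ChatGPT or any tool shown in the demo.

```python
# A minimal sketch of how a structured prompt like the "outline expander"
# can be assembled from reusable parts. The build_prompt helper and its
# parameter names are hypothetical illustrations, not an API of any tool.

def build_prompt(persona: str, task: str, interaction_rule: str) -> str:
    """Combine a persona, a task, and an interaction rule into one prompt."""
    return f"Act as {persona}. {task} {interaction_rule}"

outline_prompt = build_prompt(
    persona="an outline expander",
    task=("Generate a bullet-point outline of a summary essay about "
          "the state of nature in political philosophy."),
    interaction_rule=("At the end, ask me which bullet point I should expand, "
                      "then create a new outline for the bullet point I select."),
)

print(outline_prompt)
```

Swapping the persona ("act as a political philosopher") or the interaction rule changes the behaviour of the whole exchange, which is why sharing prompt templates with students can be so effective.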
I think Derek Bruff from Vanderbilt calls this productive learning, in a way, but students can also use it to refine the product, to refine the output. The second example I wanted to show you is Elicit. Elicit is a tool that uses a large language model, but it has been built specifically for literature reviews and analyzing literature. Can I get a thumbs up if you've used Elicit before? Just going to take a look. I see a couple of head shakes, and no thumbs. Okay. What makes Elicit interesting is that it actually looks at scholarly research and references its outputs. It's particularly good at writing abstracts or research summaries. So I'm going to paste in this question: what are student and faculty perceptions of generative AI in teaching and learning at the university? And I'm going to enter that. What it's doing now is searching for papers, and it's going to generate a unique paragraph here. So here are the papers: all of these citations are linked, and it also includes the papers below with a unique abstract summary for each of them. What it's generating here is unique; it's not just copying an abstract, this is a fresh write-up of these papers. So this is another example of how students may be able to use these tools in their learning, and it's worth thinking about the significance of this for academic integrity. I think it really changes some of our thinking around having students annotate resources, what that might mean and how they might do it. I'm going to give another example now, this time using Bing. Bing uses ChatGPT, but Bing is a Microsoft product, and a difference between ChatGPT and Bing is that Bing links to the internet. This makes quite a difference. I find the output not as good, and it has a lot more guardrails on the output, but here's the example I wanted to do with Bing.
What I've done is taken an article by McKinsey about organizational performance and generative AI, and I'm saying: summarize the following article and provide a citation for it. I could say provide a citation in APA; I didn't do that. What's interesting is that when I'm doing these demos, the output is unique each time, and this is a fascinating part of the tool. So I'm not sure if this demo is going to go off the rails and do it the wrong way; it has a unique output every time. Let's try it. I'm giving it the URL and asking for an article summary. All right, so it's able to see the article and give me a summary of it in this case. Again, I think this changes some of our thinking around annotation and finding things on the web. Now, you'll see that it didn't generate the citation, so I could just ask it to, but that's part of this uniqueness. The next example, and I thought it was our last example, oh, I have two more. One is the use of GPT for games or interactions. This is the most exciting area for me right now in terms of teaching and learning: no matter what our discipline is, being able to give students prompts, which they take home and use to play games with the tool. I do this with my son, who's 13. When he's learning something at school, I write him prompts, and he plays a game with the tool to learn it. I've taken this one from a blog post that a history instructor wrote, and I've linked it in the document as well. This is a complex game prompt, and it shows you how complex these prompts can be: please play the role of an educational history simulation game for a university class. It's about a quack apothecary and aspiring alchemist in 1348 Paris, and it involves turn-taking and commands. So I'll just run this through now, and if you get a chance, give this one a try.
What it's doing now is making an entire game that I can play back and forth with it. So I'm inspired: here's my inventory, here's a map, here are the different areas, and it's my turn first. Someone comes into my shop; what command might I want to take? I'm just going to take diagnose, write that command, and now it goes through an entire turn-taking exchange. The history instructor who wrote this said it was a great way of engaging the students in this particular historical era. I'll just do one more turn, maybe, and recommend a treatment. We don't need to ask about this now, but I think it's worth thinking about what a game or an interaction could look like in your discipline that could help students learn and give them some intrinsic motivation, especially as we look at assessment, where I think our ideas of intrinsic and extrinsic motivation may be quickly changing. The last demo I wanted to do is visuals. This came out in the last two weeks: GPT-4 in ChatGPT is able to view images and write about the image it sees. I've taken this picture from Banksy, I don't know if you know this image, this is the one he shredded at the auction, and I'm just going to ask: what art piece is this, and why is it significant? We'll see if it gets it right; it doesn't always. Oh, interesting. Wow, fascinating. I'm just going to try one more time, and we'll see if we can get it to do it. Perfect. So again, that's an example of these changing outputs: one time it says no. So, the artwork is titled Girl with Balloon; this piece is significant for a few reasons, and it goes on to discuss the overall piece. It's quite good at reading images.
I had it identify my son's Lego sets; it actually knew which set I was showing it. It said, that's the Guardians of the Galaxy spaceship, and that minifigure is Han Solo. It can also identify me paddleboarding and guess my level; it said, I think you're intermediate because of your posture and the life jacket you're wearing. So I'm going to stop there, but the point of that demo is to really think about some of these emerging capabilities, a little bit of where this is going, and how quickly it's going there. But there are a lot of challenges, and I want to touch on a couple of them before we get into an activity. If you had a chance to see Brenna Clarke Gray present on Academic Integrity Day, it's a really interesting space right now. I talked to a colleague about this yesterday, and we both agreed that if we could somehow go back and press a button that just said no ChatGPT, no GenAI, we'd probably both do it. This is a really interesting space, and it's really hard to think about these tools in a critical way while acknowledging that they are here to stay. So how do we react to that? One of the challenges is privacy. The terms of service on these tools are very ambiguous. We know that they're using the data we put in for training; we know that they've leaked user data already; and we're unsure of their data use in general. To add to that, we know from our experience with social media that Silicon Valley has a bad track record here, and if they can do things wrong, they probably will. A couple of strategies we can think about for ourselves, and Brenna brought this up too: what is our role in terms of disclosure as faculty? If we're using it, do we need to tell our students, for instance when we're using it to develop assignments? And ensuring our students have opt-out options: I've worked with a number of students now, through my work on Digital Tattoo, who will not touch these tools.
They're very uncomfortable with it. How do we help those students? How do we make sure the assignments we create are equitable for them? Think about syllabus statements; if you look at the academic integrity hub, there's quite a bit on this. And, I think most importantly, avoid inputting personal information. Even when I use it to write emails, I'll blank out the names and say name one, name two. Really think about what you're putting in there; a good thought experiment is, how would you feel if what you're putting in were leaked? Secondly, IP and copyright. This is, again, very sticky. A story about this: I was in a workshop with a lawyer, and she was able to find information inside ChatGPT that only she had written about, in an article that was not public, that was only in an Elsevier journal, I think. So how did it get that? GPT scraped things like the Common Crawl corpus, which is data scraped from the web, but we know this scraping was full of copyrighted works, and it was full of some of your work. If you've ever posted on Reddit, or published academic papers, lots of these works were scooped up, and if you were cynical, you'd say they've been reused, repackaged, and sold back to us in a way. A good experiment: take your favourite book published before 2021 and ask it to write a summary of the book; generally it can. I have it create worksheets for me from different books I read. So what do we do with that? I know there are a lot of copyright cases coming out, but it's really sticky and really challenging. Equity, and we mentioned this already: some students don't use it, some students are really sophisticated and can make it sing, some students are uncomfortable and don't know how to use it well, and some students are paying to use it. How can we think about equity in our assessments with this landscape?
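The "blank out the names" advice above can even be done mechanically before pasting text into a tool. Here is a small sketch of that idea, assuming you maintain your own list of sensitive terms; the `redact` helper and the sample names are hypothetical illustrations, not a feature of any tool discussed here.

```python
# A small sketch of the "blank out names before pasting" strategy.
# The redact helper and the sample name list are hypothetical
# illustrations; maintain your own list of sensitive terms.

def redact(text: str, names: list[str]) -> str:
    """Replace each listed name with a numbered placeholder."""
    for i, name in enumerate(names, start=1):
        text = text.replace(name, f"[Name {i}]")
    return text

email = "Hi Valerie, could you send the grades to Lucas by Friday?"
print(redact(email, ["Valerie", "Lucas"]))
# Prints: Hi [Name 1], could you send the grades to [Name 2] by Friday?
```

A simple substitution like this obviously won't catch every identifier (student numbers, course codes, addresses), so it complements, rather than replaces, the thought experiment of asking how you would feel if the pasted text leaked.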
And I'm going to turn it over to Judy now, who's going to talk to us a little bit about academic integrity, and just to let you all know, we are getting close to the next activity. Yes, so on the slide here is an excerpt from the academic integrity site, which also includes some language on what we could put on the syllabus. Whether we are going to adopt AI in our courses is a decision made by us as individual instructors, or by a program or a department, and we need to let the students know what is allowed and what is prohibited. If you click on the link that was shared, you'll see this language: when students are not sure, they must seek clarity from the instructor. So what are we going to tell our students? What is allowed, and what is not allowed? It's something we need to be ready for when our students ask us. Another very important point about cheating or misconduct is that it's really our decision what can be used and what cannot, and this decision can change from assignment to assignment; it is of course different from course to course. Letting our students know is a way to make it equitable: we are making it very clear what they can do. Many of our students share with us that this is a point of anxiety; they don't know what can be used. I'm just going to follow a script, well, sometimes I go off script, not sometimes, a lot of the time, but I cannot advance the slides. Lucas, can you help me advance to the next slide? Something is blocking me. So again, six out of 10 students, 58% of the students in this survey, and I think they surveyed 18,000 students aged 18 and above, with the survey released at the end of August, just before term one, consider the use of generative AI cheating. And again, one definition of cheating is the unauthorized use of tools.
And then within the same group of survey respondents, 52% of them actually used AI to help them with their homework. So yes, they believe this is cheating, but some of them, and the 58% and the 52% may not fully overlap, though there is some overlap, are still using it to complete their schoolwork. They also believe it will help them learn, and close to 80% of them believe it's actually going to help them improve their grades. So think about students who are very mindful of the grades they need for scholarships or graduate school applications: if they know the use of this will help improve their grade, that's a personal decision. But again, this is something that we, as instructors and educators, need to pay attention to. And I think more importantly, the students who responded to the survey really wish there were clear guidelines around its use. They also want to learn how to use these tools; they want to use them properly and use them to help them learn. We heard this in the survey, and we heard it again at the student panel last Friday: students are telling us that they want to learn how to use it. And now I'm going to pass it on to Manuel, who has been doing a lot of the behind-the-scenes support for us, and who will speak to what we can do if we don't allow students to use these tools and students decide to use them anyway. Manuel, please tell us what you've found out. Absolutely. So really, the question some of you may be wondering about when you start evaluating or assessing what students have produced is: have they been using GenAI? Two things. We've been seeing a few GenAI detectors, and some of you may be wondering, is that safe? Can I use those tools to make sure that students haven't been using GenAI?
We've been referring to Liang et al., who published something interesting in 2023 about the caution we should exercise when using those detectors in particular. According to them, they really, I would say, advise against it. And that's actually the same position UBC is taking right now: really trying to encourage you not to use detectors. The academic integrity FAQ link that was already shared has a section specifically on GenAI detectors and whether or not it's safe to use them. The reason, basically, is that a number of schools have been using Turnitin's detector and have actually started turning it off, because they found there were lots of false positives, meaning students accused when they did not use GenAI, and also false negatives, meaning students were able to bypass the detectors. So technically, these tools are not effective enough. And I have to say, it's reassuring to see that sort of ethical consideration of whether it's safe to use those detectors, something we didn't do properly, I would say, in 2020, when proctoring software was being used and we had a lot of concerns for students in particular. So I really invite you to look into this academic integrity FAQ page, which gives you comprehensive information about the use of those detectors. On the next slide: that's the perspective from the technology, but for humans, it's also not easy to decide whether or not students have been using GenAI in a submitted assignment. Linguists, who are supposed to be really, really good at investigating how we write, were able to identify whether some output was actually produced by GenAI in only about 40% of cases, which is pretty low, I would say. So there are different questions here, but we may be able to detect poorly used AI.
That's a possibility, but Lucas has been showing us that with good prompting you can get quite sophisticated outputs, and I have to say, pretty fascinating ones. So the real question is: how can we hope to do this at scale? That question is especially important for essay-based assignments and, I would say, for large classes. How can we deal with this? It's not an easy question, and it raises ethical concerns that may erode the trust between instructors and students. So, definitely not easy conversations, but important ones to keep in mind. Yeah, thanks, Manuel. So we're at the activity now. Hopefully that brings us all to the same page a little, thinking about GenAI. As a pre-task assignment, we asked you to take an assignment prompt and run it through GPT, and I think we gave you a couple of prompt suggestions. What we'd like to do now is hear from the group about what that process was like, what you found out, what you noticed. Valerie, maybe you can even start us off, because you were jumping in there a little. In a second, we're going to open up a Padlet that Manuel shared, but what I thought I'd do is stop sharing the screen and hear from a couple of folks about their experience, kicking the tires, running an assignment through. So I'm just going to open it up to the group now. Valerie, did you want to go ahead? Yeah, I'll start briefly. I put in a very simple assignment, which was about producing an informed-choice discussion on a topic. What came up initially was quite brief, but it was very logical; it included a lot of the groundwork. Then I used a couple of the prompts. I didn't get through all of them, but one of them was to make up questions to ask me about the topic.
And it brought up more that, again, went deeper; it would have added to, and been very valuable for, student inquiry. So I was really impressed. Right, thank you. Did other folks have a chance to do this? I want to hear your experience. Yeah, Daniel. So I teach film studies, and in a second-year class we started with Charlie Chaplin's film Modern Times, which is about the fear of mechanization dehumanizing the world. As a starting assignment, teaching them how to do film analysis, I said, let's race ChatGPT. I gave each group a task: watch minute 10 to 11, provide a film analysis of it, and compare that to what ChatGPT produced. And what we determined is that ChatGPT doesn't actually watch the film. So the students could talk in finer detail, whereas ChatGPT would provide all these wonderful big concepts, because it draws on existing criticism of the film. But its ability to connect to a formalist analysis of the editing, the shots, and the grammar, that was beyond its ken. So I said to them, I think my job is safe for now. But it was also interesting: it made many claims that are unsupported, and it can't support them, because it can't actually say that the camera shifts from this perspective to that perspective. It can't do that work yet. So that was my way of saying, if you want to use ChatGPT, go ahead, but I'll know, because it's going to look like this. About a third of the class were really impressed at first, like, wow, this is great. But then, when I began to tease apart its writing, they said, oh yeah, you're right, none of the claims are really meaningfully supported by the concrete details of what is seen on the screen.
And that to me was a really good exercise for them, to see that it's a tool and it will provide you stuff, but if you don't understand that stuff, if you don't know how to apply it to film analysis, you're going to be a little bit lost. That's fascinating. I love the choice of film as well. But yeah, it's so interesting that it can't watch the film. Manuel and I were testing that image feature yesterday that I was showing, and we noticed that when we put up a picture of Napoleon's coronation, it wasn't able to identify Napoleon. It was seeing the picture, but it wasn't really seeing it; it didn't know what the scene was. Your example reminded me of that. Yeah. Even in your own example, give me a history of political science: if you're dealing with non-Western authors, that answer was incredibly Eurocentric, that outline that was developed. Coming from cultural studies, where we're looking at mainly non-Western authors, you put in a text by Ortiz-Bivac, and ChatGPT is like, I do not know that person. And that to me is again pointing to this: if you want to reproduce a certain data set that has been designed and is incredibly Eurocentric, knock yourself out, but that's not what we're doing today. Wonderful. So that's again another way of pointing to the limitations: the output will smell like ChatGPT. I love that. And I mean, is there a productive learning space in those limitations? I'm seeing that more and more, that productive space of analysis, saying, here's the bias, here's the missing piece. Thanks for sharing that. Maya. I did my homework very quickly. Yeah, yeah. I haven't used this in my classes because I haven't taught since April, but I will as of January. And I'm in the phase of preparing my assignments, and I'm thinking I have to revise them completely.
Based on what I've seen. I teach an introduction to soil science course, a second-year course taken by first- and second-year students. It's a science course, so it's factual; it's a course in which they are learning concepts, and we are trying to integrate those concepts, at least at some beginner's level. I ran one of the questions from my previous assignments through ChatGPT, and I just used the free version. What I got as an answer was really good, which told me I can't use questions like that for a take-home exam or assignment or anything take-home, because how do I know who is using it and who isn't? Then I asked my son, who is a fourth-year student in psychology, to run the same question. I then realized that he has version 4; I didn't realize he had it. What he got were three versions of the answer. The precise and balanced types of answers were bang on. They were completely correct, because the question was to provide advantages and disadvantages of some method of measurement in soil science. But the third version, the creative one, that's where the answer started to fall apart, because that required more in-depth understanding of the method and its application in use. So this simple exercise tells me that if I pose questions that are simply factual, give me the list of advantages and disadvantages of a method, this is not going to work as an assignment question. Another thing that I am thinking of doing is to reduce the percentage of marks that I used to give to these take-home assignments in the overall mark, and increase the percentage of marks for the exams, both midterm and final, which I'm going to hold in person. Because in my view, that's the only way that I can really test the individual knowledge of students. I don't think, as somebody else said before, that I can prevent them from using this software.
And Judy mentioned that they want to learn to use these tools. I want them to learn to use them; if it helps their learning, by all means, but they still need to learn. And the only way we can find out how much they know is by having in-person testing. I honestly don't see how we can have online take-home assignments or exams with this being out there and them not using it. Some will, some won't, but then equity really comes into play. Thanks for sharing that. Those are all really great comments. We've shared a padlet in the chat if you want to add your experience with generating your assignment. I think what we've done so far is set up the question, and we've heard some interesting approaches from Daniel and Maya, seeing the edges of these tools. What we want to do now is talk a little bit about what the literature is saying and what our experience is in thinking about redesigning assessment. The way we've developed this is we're thinking first at a higher level, about how we might think about our course assessments. Then we're going to narrow down a little and think about our course assignments. And we're going to end off thinking about how we might use generative AI within our classes. So we're moving in that trajectory. Here are three questions that I think are important to start asking about our courses at a really high level. First, what are the skills or competencies in your discipline or your context that really need to be preserved, so that students can't offload them to a tool but need to develop those skills themselves? In the educational literature, these are sometimes called threshold concepts: what are the key concepts they need to learn? Secondly, what new skills or competencies do you need to reconsider how they are assessed? So what are some new skills and competencies coming out there?
And thirdly, and I'm hearing this a lot in the context of professional programs. I did a presentation in law and in pharmacy, and in both cases there was this discussion of: we need to think about our assignments now, but we also have to look at the profession. Law, for example, is starting to use AI very significantly in the courtroom. So what do lawyers need to know? What do practicing pharmacists need to know? I know this doesn't apply to all disciplines, but it's a larger question to ask about using these tools, because, and I'm not an optimist as you can hear, I'm very pessimistic about these tools, but I do feel that they're going to make a big impact on different careers. So if part of what we do is help students think about those careers, vocations, or disciplinary areas, what new skills are they going to need? And I've linked this in the document I shared with you, but in Australia they've developed an early-stage document, I think for New South Wales, looking at assessment reform in the age of artificial intelligence; I think that was released this summer. One of the things that I pulled out of there that I think is important is that there's an opportunity now to move a little bit beyond the product of what we're doing and start breaking down the processes of what we're teaching. I think they've done this a little in math: when we think about how the calculator and then Wolfram Alpha changed math teaching, there's a lot in math now about skills and process building. So by thinking about process more, we can get a better idea of what students are doing over time. An example of this would be developing an essay with multiple formative check-ins along the way. Product-focused assessments, and again to pick on the essay, make it really hard to see the learning processes. There are lots of skills, lots of learning processes involved in it.
So thinking about how tasks can show students' thought, process, and skills. This can help us think about critical thinking, decision-making, and areas where AI might fall short. And in both Maya's and Daniel's examples, we see areas where there are still limitations on generative AI. Now Manuel is going to go over a couple of other specific assessment approaches we can think about. Thanks, Lucas. Yes, so we've come up with a list of potential assessment strategies, drawing on a resource from the University of Michigan, and I'll be sharing the link in the chat in a few moments. It's basically a list that you can see as suggestions of changes that you could make, should you decide to make changes to your assignments. In bold is the pedagogical strategy considered, and underneath, in the bullet point, is the actual assessment method that you could explore or implement in your course. These are really strategies for engaging and assessing students beyond recall of factual information, just like Maya was saying, beyond the things that are just too easy for gen AI to produce. So what can I do to push the limit a little bit? These are strategies that are considered effective for rising above gen AI tools. They're not going to be perfect. Depending on the discipline, they may be effective or not; you may have variations in effectiveness, that's for sure. But we thought we would share them with you. Some of you already mentioned some of these, saying, I may do more in-person examination. That's something we've heard across campus already, in different disciplines: people moving away from take-home exams in particular, or trying to adopt, let's say, a lockdown browser. And I can see in the chat that some people have different views on the effectiveness of this, and I'm happy to respond.
But having more in-person activities, so live debates or presentations, role play, and so on. A variety of options. But as always, it's not going to be the perfect solution to everything. So Lucas, if you don't mind, could you show the second slide with additional techniques? Perfect, thank you. So really the point is going back to your objectives and what it is that you're trying to measure or put into practice, reflecting on all of those, comparing them with what AI can or cannot do, and then saying, okay, what is the safe place? But you're not alone in this. At the CTLT in particular, we have a lot of people happy to have conversations with you and help you with this task of redesigning your assessments. Another slide that I have is basically a graphic that shows how your assessments might respond to AI, and that also comes from the University of Michigan resource. We've just added a little item, based on Brenna Clark Gray's presentation on Friday, that adds an ethical consideration with a values element. Basically, if you don't think this aligns with your values, then you have a stronger position: you may not want to integrate gen AI in your assessment. So there's a process you have to go through to see how your assessment may respond, but always keep your values somewhere in it and ask, is this something that I truly believe would benefit from using AI or not? So I guess this might turn to this slide. This is me. Aside from working at CTLT, I also teach a class in the summer, a second-year-level course. In the summer, I have a lot of students taking my course as an elective. I just wanted to give you the context of the course and what I did with some of my assignments. There are three assignments where I see the use of generative AI could be an opportunity and also a challenge.
For me, as an instructor, that matters when I need to grade the students' work or assess the learning. So I'm going to talk about three of them and my decision-making. The other two assignments are online quizzes. They are all fact-based, but because of the course, I do not see the use of AI there as a potential challenge or worry. So again, these are just the three assignments that I am going to share. Let me get my annotation tool up here. Okay, so my first assignment here, thank you. The chat is telling me that there's an upgrade here. Thank you. My first assignment is at the beginning of the term, a very easy grade. Post a picture of a food, this is a food science course, post a picture of a food that you like, that you eat every day, that is important to you. I really try to make the assignment very personal to them. So, tell me a little bit about why this is important to you, and share with me some of the questions that you have. I expected very little use of AI, but in the end, I suspect a few students actually used AI. For why this food is important to them and why they chose it, they wrote a beautiful essay. Beautiful. I've never seen students in my elective course who would write a beautiful essay with lots of adjectives, a little bit excessive, to describe why the food is important to them. This is okay; this is a personal story. I do poke a little bit at it. For the students who wrote a beautiful essay, I said, I would love to read other essays that you've written. Or I poke them and ask, why is this important? Why do you want to know about these questions? I really just want to challenge them a little bit and let them know. I also address the class and say that I noticed some use of generative AI, but I am not sure. So I do talk about my observations with my class.
So with that, at the end of the term, they have to respond to those questions. I ask them to make very specific reference to the course material: which week, which lesson, which chapter did you learn about this that responds to your food? I also share with my students what I did with generative AI, how I use generative AI to make my assignment, my response, better. So I am not restricting them from using it; I actually try to share my experience. And unfortunately, in the end, I still detected some students who just used generative AI to give me a few bullet points that were not relevant. They cannot tell me where in the class they learned that fact. So I did also change my rubric, my grading scheme, a little bit. I made that specific reference to the course a higher percentage than in the past. But in the end, I still look at my rubric a lot. I try to compare with what I suspect, and I keep asking myself, do I still want to give the student two out of eight, three out of eight, for the fact that the student actually tried? So again, it's a lot of thinking in terms of the rubric: what am I going to do when students sort of use generative AI, but don't use it to the extent that would actually help them learn, right? And where is the line? And how can I detect that? That is something I will need to continue to monitor and reflect on before I teach the course again next term. I also have a term project that is done by a team. And unfortunately, even though it's a team of strangers, students who come together, and I do all the team-building activities and exercises, there were a couple of teams that submitted a draft online that, I'm going to say it here, was almost without doubt done by generative AI. To point it out to them, in my feedback I said, hey, I also got a very similar response from ChatGPT.
Seriously, please refine your outline, please change your outline. I would like to see it be a little bit more relevant to the course. And one group of students continued to give me material and responses that were very suspicious. I am still learning. I will look at this assignment and the work done by this team, and really think about the language, and how I can guide my students better, when I offer this course and use the same assignment again next term. So that's a little bit of my story, and a little bit of success. I think some students are using it well, to the point that they are able to demonstrate their learning at the level that I specify, but there is still more work to be done to help our students learn the material that I want them to learn in my course, while at the same time using this tool better for their own learning. So that's my experience. Wonderful. And I'll just get you to clear your annotations, Judy, if that's okay, although it's kind of colorful. Yes, I know. And then if Manuel or Judy could share the document again in the chat, that would be great. I know a couple of people would like access to it. Thanks so much. It must be the update that they did; I cannot find the clear option on the slide, but I will keep working on it. I can probably do it here, one moment. All right, I got it. Perfect. Thank you. So what I want to talk about now: we've talked about assessment, and now I'm going to talk a little bit about specific assignments, and then we're going to do an activity where you look back at the assignment that you ran through and think about what an assignment makeover might look like for it. When thinking about course assignments, I grabbed this article that came out on October 12th in Maclean's, and I think there are some questions around what the university essay is going to look like.
And it's a bit of a provocative title, the idea that the university essay will die out. Kumar gives the example of how basic arithmetic skills changed a lot when the calculator was developed. But I think it's worth looking at that big essay: what skills are inside the essay? What are we assessing with an essay? Is there a different way of assessing this? Because this is one of the areas where gen AI has been quite good, and perhaps the essay has been used to assess a lot of things within it. So what might we think of for an individual assignment? First, and to reflect back on that survey that Judy shared, we know that students really want guidelines for how they can use AI in different assignments. Where can they use it? Where can't they? What can they do with it? What are the boundaries? Can they use Grammarly to check the grammar in an assignment? Can they use ChatGPT for that? So be really careful about laying out guidelines. Secondly, thinking about scholarship and citation, or even doubling down on it, although I don't know if we can double down on it at university. While these tools can cite references, there are going to be a lot of gaps in those references, and citations help us understand what perspective the students are bringing. When we ask gen AI to produce things and it doesn't give us references, we don't know where it's drawing from. We don't know where its biases are. We don't know whether it watched the film. By getting students to create these citation trails, I think it's good practice for scholarly work, but it's also a better way for them to move beyond the black box, and for us to understand the extent to which they're using it. And third, process focus. I'm very interested in transformative learning and the transformative learning process. That's something that AI can't do for students.
So thinking about those learning processes that students go through, and breaking down different products into process areas. Secondly, and I've seen this done more and more, is including reflections on the process as part of the grading. So rather than just submitting an essay or a portfolio assignment, having pieces of the process where students are reflecting on the group work, reflecting on how they did something. I think math again is an interesting area to look at, in the way that math gets students to show their work. When they're answering questions in an exam, they're showing their work. In a way, getting students to reflect on, say, an essay process is a way of getting students to show their work in this area. And acknowledging, and I think a couple of you have touched on this already, this is a flow chart that was adapted by the Academic Integrity Hub from a UNESCO document. Start: does it matter if the output is true? No? Then maybe it's safe to use ChatGPT. Yes? Then we need to think about it further. Right now, ChatGPT, generative AI, is not an expert. It's missing a lot of things. And I think this gives us a space to work in. We saw that in the examples that Maya provided; we saw it in the example around film: we're still the experts. This is a tool that is very different when novices use it from when experts use it. I recently had a discussion with my mother, who wanted to use ChatGPT to develop a legal document. And it's like, well, okay, but what are you going to do if a lawyer doesn't look at it? It's going to be a meaningless legal document, because another lawyer may be able to poke holes in it, and you won't know whether they can or not. So we're still in this expert space, and what does it mean to help students develop mastery in this space, which is something they still need to do?
Similar to this, and a way to help them develop mastery without offloading cognitively to these tools, is integrating in-class assignments when appropriate. So regardless of what they do during the year, if they have an in-class assignment at some point, they're going to have to develop the skills that are being assessed. And here are a couple of ways that folks are doing assessments in the classroom. I don't need to spend long on this; I'm sure all of you have your own ways of doing classroom assessments: presentations, invigilated exams. Assessed active learning is an interesting one. In the sciences, in classes like physics and biology, for a long time we've seen things like peer instruction, where students use clickers to be assessed on their active learning activities. So is there a way of adding assessment layers into these activities? So, Judy's going to walk us through an activity now on making over the assignment that we ran through. The questions that we'd like you to reflect on as you think about your assignment are on the slide. We also shared some of these questions in the pre-task. How is AI going to affect the goals of the assignment? And again, we have to ask ourselves first, what is the goal of the assignment? We might have been using the same assignment for years, and it sort of worked. Now we have to ask ourselves, does it still work? Does it matter? Let's ask ourselves what the goals are, how the use of AI may undercut those goals, and then how we might mitigate that. Is it simply moving everything into an in-person exam? Or, to look at something more positive, how might AI enhance the assignment? Where will students need help figuring that out? As I briefly shared, I do see places where I can enhance my assignments. I try to share my strategies, but could I do more? And in your course, what can you do? What should you do? And then, focusing on the process.
How could you make the assignment more meaningful for students, or support them more in the work? I think I shared with you my two failures, the two places where I wanted to make it meaningful for students, but still. But I think I should also pay attention to the positive things. Maybe it's still working, as students are telling me that they use AI the way that I intended them to. And again, how may students use the tools while working on the assignment? So I would like to ask: any thoughts, any thinking, anything that is bothering you, or an idea that is popping into your head that you would like to share with the rest of us, so that I can learn too? I teach a class of 120 students, and I'm very comfortable with silence. At the same time, I also assume that many of you are actually thinking. Which assignment? Is it the quizzes? Is it the take-home? Is it the project? Or are you thinking about making it multimodal? Yes, just like some of the pedagogies Manuel shared. Yes, deeply using it to learn. I would like to ask how we can use it deeply to learn. I think there's one strategy that we have not mentioned: we cannot really require our students to use it, because of privacy issues, but we can still teach them how to use it better. And the other one is our experiences. When we ask our students to use the same prompt, when we give them options to opt out, for example, each student will also get a slightly different response. So, deeply using it to learn: for me, that is where we have conversations in the classroom and share our experiences. But I would like to hear your ideas on how we can use AI to help us learn more deeply. I think for the very simple assignment that I chose to look at for this, which is quite a basic assignment, the informed choice one:
I think I'm okay with them doing an initial search for information with AI, but then they have to take responsibility for the information that they finally present. They have to do a role play in class, so it's not a written assignment. It's a role play that they do with their peers, and the peers act as clients, who then ask them questions. So they have to show their understanding. It's not just a performance; they get tested on their understanding of that knowledge and information. So I think that helps, because it's more process than just product. And I also love the interaction between students; it's a human-to-human interaction. Yes. I think because of the range of assignments that we ask our students to do, with the number of different activities and the different goals: for example, there are assignments, and Valerie shared this too, where we can just let them use it, it's okay. But there are other assignments that may need a big makeover, and hopefully we can actually use AI to help students learn more. That sounds like a segue, Judy. Yes, should I jump in now? Yes, please. Wonderful. I think Dorothy in the chat mentioned that a lot of these assessments are about observing ChatGPT and its flaws, instead of deeply using it to learn. So in the last eight minutes, what I want to think about a little bit is what it looks like to use it as part of the learning process, as well as mitigating it. To lead this off, it's worth thinking about, and I think we've talked about this a couple of times, ethical use. Brenna Clark Gray, on academic integrity, has brought up that for some folks, just using these tools is unethical, and I totally acknowledge that. But if we are going to use it, what guidelines and expectations are we providing? How are we thinking about choice and learner equity, especially if some learners don't want to use it?
How are we supporting students in using these tools, again to help ensure equity? And something that's come up, through Valerie Irvine at UVic, who's a professor there, is: how are we bringing the student voice into this conversation? Are we getting students to collaborate on thinking about these assignments, and on how to develop assignments and assessments? One way, and I think this is a bit of low-hanging fruit right now for using these tools, is using them for student support. So this is a tutoring prompt: giving students prompts that they can take home and run through gen AI as a tutor. The challenge here is that the tutor is not always accurate, but that can be part of the productive learning space. Or give them games, like the example that I gave, so that students can play these at home. Something that Ethan Mollick does, who's a professor, I'm forgetting the university, and integrates AI quite a bit, is have students take a concept and generate multiple examples, because we know that for student learning it's helpful to have different types of examples of theoretical concepts. This links to the game, thinking about AI as playable media: what does it mean to have this interactive tool that students can play with? So I think this is a bit of low-hanging fruit. We still need to think about student equity, but we can do this without building it directly into our course assignments. We can also think about what it might mean to develop these tools and allow students to use them as part of their assignments, and sometimes these assignments involve three areas. So something that students can output from gen AI: either giving them prompts and having them create an output, or giving them an output and having students analyze the output from gen AI, and in the film example, having students critically analyze that output to see what was missing.
An example I'll show you in a minute: an instructor from theatre and film, Patrick Pennefather, had students analyzing AI images for bias, asking what is missing and how that can be used as a space of productive learning. Or, something that's done in a couple of education courses at UBC, having students build on and refine the output based on some of the gaps that we're seeing in these tools. So this is the example I mentioned from Mollick and Mollick, where what he's doing is giving students a prompt that they take home, and having the students generate examples based on the concept in that prompt. A second example, this is a health class at UBC, is having students generate an answer to an assignment and then compare the answers, similar to the film example, to determine the quality of the output: what was missing, what perspective was embedded in it, and so on. This is a similar example from food science, where the faculty member had the students develop an output of a food product, to make something sweeter, softer, etc., generate these food formulations, and then analyze the output that was generated. And this is the theatre and film example that I mentioned with Patrick Pennefather: students were asked to use an AI image generator to produce an image of a person working on a computer, and analyze the output for gender bias. Just like when you put a search into Google, when we look at the outputs of these tools, there's still a lot of bias built in. Someone commented on my LinkedIn that whenever they ask it to role-play a doctor, it's always a male doctor. Whenever you ask Midjourney to act as a computer science professor, it's generally a male professor, often with glasses and a beard; I'm not sure where the bias sits on that one. But this can again be a space of productive learning.
So this brings us to the end of today. We had a final activity where you were going to share how you might integrate generative AI in your teaching, but just to summarize what we've touched on today: we looked at getting on the same page about what generative AI is; we talked about its capabilities and some challenges; we talked about what it might look like to revise, or at least rethink, our overall assessment; and then we delved into specific assignments and looked at what changing an assignment, and integrating gen AI within an assignment, might look like. So we want to thank you for taking the time today to join us for the session, and for the great sharing that you did. It's always wonderful to hear from folks about what they're doing and thinking in what is a really, really emergent space.