So welcome to Approaches to Prompting, one of our 30-30 sessions. By 30-30, what I mean is that the first half is an interactive presentation, and the second half is a discussion where we hope you'll share some of your approaches to prompting and how you're thinking about it. I'll hedge that a little bit: we wanted the presentation itself to be a bit more interactive, so this session could map out more like 20-40 or something around that. Before we get going, let's introduce all of the facilitators, and then we'll jump into the session. My name's Lucas Wright. I'm a senior educational consultant at the CTLT, and I've been working quite a bit with generative AI since about November of last year. I'm really excited to work through it with you. And Rich. Thanks, Lucas. Hi, folks. My name's Rich. I also work at CTLT. Technically, my title is Programmer Analyst 2, which is a long-winded way to say that I work with WordPress quite a lot. I've also been using generative AI tools since around November. We've built some new bits and pieces, which I'll be showcasing a little later. And I'm really excited to play with these tools, as well as learn from the ways other people are playing with them too. Who's next? Judy? Judy Chan here. I go by she/her pronouns. I also work at CTLT. I've mostly been playing with generative AI, but I started using it more in the last few months just because it's quite powerful. So I'm happy and excited to be learning with all of you today. And you've used it in your class as well, just to mention that for everyone. Yes, I introduce it to my students and encourage them to use it. Thank you, Lucas; I forgot about my other identity here. Nicole, over to you. Hi, I'm Nicole Ronan.
I'm a learning designer at CTLT, and I work with faculty and students to come up with ways to work with AI in the classroom. I'm really excited to hear what people are doing or what they're curious about today. So thanks for coming. And over to you, Lucas. Wonderful. Before we get going, I just wanted to acknowledge that I'm coming to you today from Port Moody, which is on the traditional territory of the Kwikwetlem, Tsleil-Waututh, Stó:lō, and Musqueam peoples, where I'm grateful to live and work on the days that I work from home. I also wanted to add this quote from Michael Running Wolf, a member of the Cree Nation, a computer scientist, and a PhD student in Canada. I'll give you a second to read it, because as we're thinking about the scraping that generative AI does, I think it's worth moving beyond even IP and thinking about how these tools are located within colonial structures. As a very specific example of this, I can go on to ChatGPT and learn from the different Indigenous resources that have been developed at UBC without any attribution of where they're from. So again, today's agenda is a 30- to 40-minute interactive presentation followed by a 30-minute discussion. Let's jump into the interactive presentation now. I think the best way for you to learn in this session is to follow along with it, and Nicole is going to be sharing some links in the chat now. We have a guide that we created, and it goes over all the content we're sharing. It has examples of the prompts, as well as links out to the resources and links for each of the activities. Secondly, you may want to have a generative AI tool open. If you've used ChatGPT 3.5 or 4 and have an account, please jump into those tools and have one open beside you. If you're not comfortable signing up or you haven't yet, you may want to use one of the more open tools.
So TalkAI: it's a little bit spammy with ads, but they've added GPT-3.5, so you can go there without a login and play with prompts. Perplexity AI uses a different large language model; you can use Perplexity AI as well. A lot of the prompts in this session work best with ChatGPT, but we've tried them with these other tools too. So please follow along with us; we're going to be getting you to go in there and try out these different prompts as we go. Let's start by just seeing where we're at. In these sessions, what I've been noticing is that people are coming from such diverse backgrounds in how much they're using these tools. So Nicole's going to share a link to this padlet. On the padlet, you're going to see a poll asking about your experience with generative AI. You're also going to see a number of categories of prompting and different ways of using it. What I'd like you to do is use the like or heart button to show how you're using generative AI. I'm going to stop sharing in a moment. I'll just give everyone a chance to hop onto that padlet, and I'll share it together with you. I'll give everyone a couple of minutes to start filling that out while I get to the padlet. What you'll see is, on the left-hand side of the padlet, there's a poll that you can vote on, and I see there are 21 votes here. On the middle panel, how you're using generative AI, and we can see the hearts going up there. And on the right-hand side, different approaches you're using to prompting. If you're interested, we've also put a blank spot here; if you want to share some of your approaches now, you can do that as well. So I'll give you another two minutes on this. Again, it's just to get a read of the room. What are we bringing here? How are we prompting these tools? What do we know already?
So it looks like quite a few of you are using these tools to synthesize information, for brainstorming, and for writing and editing support. Someone also mentions brainstorming here. Fewer of you are using them for assignments, and that's a trickier space right now, especially with privacy and copyright. Creating teaching materials is something we're going to talk about today. And then the different approaches you're using to prompting: it looks like a number of you are evaluating the responses. Some of you are using few-shot prompting, which is where you use examples to guide the model. And 12 of you are using personas, which is actually the first approach we're going to talk about. I see someone mentions, "I'll use it with my own writing, give it a few paragraphs, ask it to summarize and evaluate." I'm not sure if you're doing this, whoever wrote that, but an interesting trick with that is to ask it to create a table of changes below the writing to show what it did before, what it did after, and why it did that. So you can kind of see what it's thinking and incorporate that as well. Let's take a look at the poll now. It looks like 14.7% of you have no experience with generative AI, 50% of you have played with it a little, 23% have experimented with it a lot, and 11% have used it at home or work. If you have no experience, try copying and pasting in the prompts we share today, or even just listen along. If you use it regularly at home or at work, please try to share some of your understanding of these tools. One of the reasons we made these 30-30 sessions is because this is all emergent, and we want to hear what you're sharing and how you're using it, so we can develop together as a community. I'm going to jump back into the slides now. Thanks so much for sharing that. And I'm going to turn it over now to Rich, who's going to introduce prompting to us. Thanks, Lucas.
Yeah, because we've got roughly 15% or so that have hardly ever used one of these tools, and about half of us in the room are relatively new, we need to set a baseline. There's a bunch of people here that already have loads of experience, so the next two minutes might not be amazingly relevant for you, but we'll get there, I promise. So what is a prompt? I don't seem to be able to change the slides. Ah, thank you. So what is a prompt? This is the Wikipedia definition of a prompt. You can go ahead and read it, but the way that I personally think of it is as a question that you can ask, or a statement that you can make, with the understanding that a large language model, or a generative AI tool, is going to give you a response back. And just adding to what Lucas said a minute ago, I don't believe there are experts in this field yet. As far as we know, these tools have really only been publicly available for 10 months, not even 10 months. And the people that make these tools, the engineers at OpenAI who make ChatGPT, for example, don't fully understand the outputs that the large language models generate. The people that make this don't fully understand how it works. So the concept of an expert, in my opinion, doesn't exist. And that's why these 30-plus-30 sessions are really important, because we would love for those of you who have some experience using them to share. Sure, Lucas and I, and a few other people in this room, have had months and months to use them, but we're not experts, and we would love to see how other folks are using them. And you use them via prompts. That's how you get results back. You put a statement or a question or a series of questions into one of these generative AI tools, and you get a response or a series of responses. So what can you get out of these tools?
The most popular one, which I think most of you will have heard of, is ChatGPT. ChatGPT is a generative AI tool. It produces words; it's a chat that you can have with a robot, if you like. And what can be generated from ChatGPT? Well, language. But that's not all that can be generated by ChatGPT, and it certainly isn't all that can be generated by large language models in general. Code is something that ChatGPT and others can also produce. Images and video come from other tools; ChatGPT itself has an associated image tool, which can produce images, and now video. And then there's a bunch of generative AI tools, separate from ChatGPT and Perplexity AI and TalkAI, that can do things like protein structures and text-to-image and music, and a whole bunch more. In fact, one of the slides that we're going to see in a minute was generated by AI. It'd be interesting to see who can guess which one, because I didn't believe it when I first saw it. It was Lucas who got this going, and it was entirely produced by generative AI. One thing we do need to talk about at a much higher level with these tools is these four concepts: privacy, copyright, accuracy, and the tools themselves. Very briefly, the lawyers have asked us to say that none of the tools we are showcasing today, or that you see on the ai.ctlt.ubc.ca website, have been through a privacy impact assessment yet at UBC. What that means is that none of them can be required for use in courses at the moment. There are no official updates as to when those assessments are coming, but the privacy impact assessment team has received several requests from members of faculty and is progressing down a route to get that done. Further, tools like ChatGPT require you to sign in. And because they're hosted all over the world, actually, but predominantly in the US, it means that the data is not stored in Canada.
So when you use these tools, the data that you put in can be, and probably will be, collected and then reused. What that means is that it is safest to assume that any information you put into ChatGPT and similar tools will be stored and can be reused by those tools in the future. Now, speaking specifically about ChatGPT, there is a setting that turns off history, and the setting itself says that any data you then put into ChatGPT won't be reused by ChatGPT in the future. However, if you actually read the terms of service, it is very vague. So again, it is safest to assume that you should not put personal or private information into these tools, with the understanding that it will probably be reused in the future. Okay, and in terms of accuracy, which is the bottom right-hand corner there: these tools are generative. The G in ChatGPT stands for generative. I like to think of these tools as generation machines, not regurgitation machines. Google regurgitates facts; ChatGPT generates information, and sometimes it generates information that just isn't true. For example, one of the prompts I used many months ago was: without naming them, how many countries begin with the letter B? The first time I put that in, it said three. Then I pressed regenerate, and it said four. Then I pressed regenerate, and it said five. And I don't think in those 30 seconds of me regenerating, more countries had come into existence. It basically didn't care for the facts; it was just generating information. So remember that it might not always be accurate. The goal for using these tools is to generate new things, not necessarily regurgitate facts, because that's what Google and search engines are for. Okay, again I can't change slides, Lucas; it's stopping me. Thank you. So loosely, what is a prompt? Well, it's something that you put in a box that looks a lot like this. This is the ChatGPT one, and you get answers.
So we've been through that briefly. Again, it won't let me change slides, Lucas. This is an example of quite an advanced prompt, and we're going to jump back to what a basic prompt looks like in a minute, because I'm going to show you an example of something that I did. I'm going to read this out, but it breaks down into three specific things. "You are a political science faculty member." This is something that you would type into ChatGPT, by the way. So you're saying to ChatGPT: you are a political science faculty member at a research university in Canada with 20 years of teaching experience; write 20 ideas for learning activities that correspond to different levels of Bloom's taxonomy for a second-year comparative government course. So there are three things highlighted there. You have given ChatGPT a persona: you have said, ChatGPT, please act as a political science faculty member at a research university. You've given it some sources: you've said, you know, use Bloom's taxonomy. And you've been specific with it: this is for a second-year comparative government course. Then ChatGPT is going to generate 20 ideas for learning activities that correspond to different levels of Bloom's taxonomy. Now, if you were to just write, for example, "write 20 ideas for learning activities that correspond to different levels of Bloom's taxonomy," you would still get 20 ideas out of ChatGPT. But because we gave it a persona, because we said this is to do with political science, and because we were specific that this is for a second-year comparative government course, the output of this prompt is likely going to be substantially better than that of a more basic prompt. I'm going to give you a really loose example of exactly that, because for a while now, we've been building a tool that helps students go through critical analysis. Initially this was built seven years ago as a very, very basic tool.
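The persona, sources, and specificity parts broken down above can be sketched as a small prompt template. This is a minimal illustration, not anything shown in the session itself; the `build_prompt` helper and its exact wording are hypothetical.

```python
def build_prompt(persona, task, sources=None, specifics=None):
    """Assemble a structured prompt from the three parts discussed above:
    a persona, optional sources to ground the answer in, and optional
    specifics that narrow the task."""
    parts = [f"You are {persona}."]
    if sources:
        parts.append(f"Draw on {sources}.")
    parts.append(task)
    if specifics:
        parts.append(f"Specifically, this is {specifics}.")
    return " ".join(parts)

# Rebuild (approximately) the advanced prompt from the slide.
prompt = build_prompt(
    persona=("a political science faculty member at a research university "
             "in Canada with 20 years teaching experience"),
    task="Write 20 ideas for learning activities.",
    sources="different levels of Bloom's taxonomy",
    specifics="for a second year comparative government course",
)
```

Dropping the `persona`, `sources`, or `specifics` arguments reproduces the "basic prompt" case the speaker contrasts this with: the task alone still works, but with less context for the model to draw on.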
I'm just going to, Lucas, would you mind stopping sharing your screen? Yeah, perfect, thank you. This was built seven years ago, I'm on the wrong tab, as a tool on a law prof's website. It was a very basic self-guided Socratic exercise, whereby the questions that the students were asked were always the same. This is actually step two of this particular thing. Step one, by the way, is basically: name an issue in five words. Step two is: thinking about your issue in the context of these articles, think about how that issue translates when applied specifically in the context of video games and law. Now please reframe your issue so that it specifically refers to video games; do so in five words or under. A couple of points here, really. One, this is the first real question. And these five links up here all exist because the law prof, who I describe with love, having worked with him very closely for the last seven years, as an absolute lunatic, has for the last seven years been creating what are called news of the week posts. In those news of the week posts, he puts out quite literally hundreds of links to things that have happened around the law world and video games. These five links are drawn from a group of, at the moment, 35,000 links that have accumulated over time. These links basically provide an extra bit of context for the students as they're going through this. So the student will go through this exercise, which is actually a five-question exercise, and they'll be provided five links that they can, but don't have to, use to help them reframe and then eventually come up with an essay. Now, this has worked. This has provably worked. In fact, this tool has been mentioned multiple times by the prof's students in the evals.
And even in comments that students have sent to the prof over the years: students who have graduated are still using this tool, which is kind of great. However, it's actually really limited, because the questions are always the same. Regardless of what the student enters, the questions are always the same. One thing we've heard is that it would be nice to have more contextual questions. If you think about it, that's kind of a great use case for ChatGPT, because it will generate information based on something that you put in. And this was my first go, something like six months ago. It wasn't great. This was my first prompt from many moons ago: "Write a Socratic method exercise for a law student around the topic of video games. It should be made up of five questions." And you know what, these five questions aren't bad. They're not all terrible. But the thing is, this isn't a conversation anymore. This is not a chat, if you like. This is just five questions, which is essentially exactly what we did before, so it's no better at all. However, over time, and this is really important to remember, I iterated. I must have gone through dozens of prompts, and I'm not going to show you all of them, because some of them are embarrassing. But the idea is that you will get better at doing this. Don't put in a prompt, get a response, and go, "yeah, the tool doesn't know what it's talking about." Iterate. This is quite hard as a human, because what I'm asking you to do is say, maybe the human doesn't know what it's talking about, and assume the tool does. I personally treat ChatGPT like a really green intern who has loads of energy and loads of capability, but needs to be treated with respect and given huge amounts of context to get the best out of it. So this now is the end result of weeks' worth of trying to get better and better. I'm not going to ask you to read it all, but I'm going to break it down.
The first two paragraphs are the Wikipedia definition of the Socratic method. These are the five questions that we actually asked on the Socratic method website, all five of them listed out. This is the problem statement: however, these questions are too generic. Rich, can you zoom in a little bit? Oh, absolutely, sorry. And just two minutes. Yep. "Using this information and other knowledge you have about how the Socratic method works, and thinking specifically about the pedagogic value of the Socratic method in post-secondary education, your task is to ask a series of no more than five questions, one at a time, which helps the student go through a Socratic method exercise. Your first question will be: name a digital world issue that interests you in five words or under. When the student responds to your question, you will then formulate a follow-up question that asks the student to name or broadly think about their topic." Okay, and that goes on and on. "Your role is that of the person asking the questions. You should not answer those questions, so please only provide the questions one by one. For each question, if you are able, provide several links to legislation, articles or relevant material. Your reply to this prompt should be the first question to ask the student, and then subsequent replies will follow the above logic." Question one: name a digital world issue that interests you in five words or under. That's the question I asked it to ask. "Loot boxes in video games" is my response. To which it responded: thinking about loot boxes in video games, how do you think the current legal framework surrounding consumer protection and gambling applies to this issue? Consider aspects such as age restrictions, transparency and the potential for addiction. I didn't put in any of this. I literally wrote five words, "loot boxes in video games," and it came up with this fantastic question.
It also gave me these three links: the legal status of loot boxes in different countries, the impact of loot boxes on young players. None of these links exist. They're all fake; it made them up. That's kind of what ChatGPT does. However, all of these articles do actually exist on those websites. They just don't exist at these links, because the links have been generated. ChatGPT treats links as something that can be generated, not as facts. We'll get around to that in a second. "Children shouldn't be exposed to gambling in video games" is my response. And then: building on your concern about children being exposed to gambling in video games, how do you think the video game industry should address the issue of loot boxes to ensure the protection of young players? Consider possible regulatory measures, industry self-regulation and parental control options. And this whole exercise is now tailored to exactly what I was talking about as a student. This is so powerful. And it's something that we're building a WordPress plugin for that isn't going to be restricted to video game law. It's not going to be restricted to law at all. It can be something that can be used by any faculty or department where critical thinking is very much necessary, which I think is most. Okay. Lucas, do you want to switch back to your slides? And that basically leads us to approaches to prompting. So realistically, you've seen an example of what a prompt looks like. You've seen an example of what a bad prompt looks like. And you've seen an example of what an "advanced" one looks like, and I'm very much using air quotes there. But let's now break it down a little bit more into what other types of prompting you might use. And for that, over to Lucas. Wonderful. Thanks, Rich. I love seeing the complexity of what you can do with prompts, and that's one of the reasons we brought that in here.
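The one-question-at-a-time behaviour described above works because a chat is just a growing list of messages that is resent to the model on every turn, so each new question can build on every earlier answer. Here is a minimal sketch of that structure using OpenAI-style role/content messages; no API call is made, and the `start_exercise` and `record_turn` helper names are my own, not part of the tool being demonstrated.

```python
# The system message pins the model to the questioner role,
# paraphrasing the Socratic prompt described in the session.
SYSTEM_PROMPT = (
    "You are running a Socratic method exercise. Ask no more than five "
    "questions, one at a time. Your role is to ask questions, never to "
    "answer them."
)

def start_exercise():
    """Begin a fresh exercise with only the system instruction."""
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def record_turn(messages, question, student_answer):
    """Append one question/answer round. The whole list would be resent
    to the model so the next question is tailored to all prior answers."""
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": student_answer})
    return messages

history = start_exercise()
record_turn(
    history,
    "Name a digital world issue that interests you in five words or under.",
    "Loot boxes in video games",
)
```

This is why the iterated prompt behaves like a conversation while the first attempt did not: the first attempt asked for five questions in one shot, whereas here the context accumulates turn by turn.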
I think when we think about prompts, there are a lot of articles out there that I feel underestimate these generative AI tools. There was one in the New York Times about the tools not being able to successfully write university entrance exams; but with prompting, they could. Prompting is like coding. It's very complex. And here's an example. This is Théâtre D'opéra Spatial, an award-winning generative AI image created with Midjourney. It was quite controversial, because although it won an art award, it wasn't able to get copyright, which is a whole other discussion. But what was interesting is that prompting this image took the person more than a week. So when we're thinking about prompting, it's worth thinking about how much effort we can put into the prompts that we have. Before I jump in, I also wanted to question this term "prompt engineering." Can I get a thumbs up if you've heard of prompt engineering before? Just using your virtual thumbs. So it looks like a couple of you have. Prompt engineering is the idea that we can engineer our way through these prompts. But I think we need to question this a little bit. Yes, engineering and tips are one way of prompting, but in an academic institution, we have so many different ways of prompting. How does a philosopher prompt a language model? How does a psychologist prompt a language model? How does a librarian prompt a language model? And then, how do we personally prompt these tools? I think as we're getting into conversational prompting, all of us are finding tricks; we're finding ways of doing it. Some of them are prompt engineering; some of them are just how we're approaching prompting. So let's put a couple of tips into our toolbox. I wanted to share this effective prompting list, which is from a UNESCO document that I've linked in the guide that I've shared with you. So, effective prompts use clear, straightforward language.
It's easy to understand, and we're avoiding complex or ambiguous wording. As Lisa pointed out in the chat, we are speaking to a computer here. Number two is using examples. So rather than just asking for a thing zero-shot, we can add examples to the tool. I use this quite a bit with learning objectives. Rather than asking it to generate learning objectives, I will put in a learning objective that's measurable and follows Bloom's taxonomy, and I'll ask it to create similar learning objectives for a particular topic. Then context: building context into a prompt. And then refinement. In Rich's example, there were months of refinement; in the image example I shared, weeks of refinement. So this is an ongoing conversation. And finally, ethics: what are we putting into these tools? Are we safeguarding our students' privacy? Are we safeguarding our own privacy? What about copyright? How are we thinking about copyright? How are we thinking about the bias in these tools? I'm not going to go through all these tips now, just because I'd like to get into some approaches, but I have put them all in the guide, and they're ways that you can add to your prompting toolkit and go one step up in terms of your prompting. But let's look at a couple of approaches. What we've done is organize these approaches by different things that faculty, students, and staff are doing at the university. The first one is developing teaching materials. As many of you indicated in the poll, teaching material is one thing we can use these tools for. That could be developing case studies, developing quiz questions, et cetera. So what prompts can we use to do this? One prompt that I noticed many of you have used already is the persona prompt. This is when we give the tool a persona in order for it to call on more specific and contextual data. I have an example here.
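The few-shot technique described above, showing the model a worked example before asking for similar output, can be sketched as a simple prompt builder. A minimal sketch; the `few_shot_prompt` helper and the example objectives are made up for illustration.

```python
def few_shot_prompt(examples, task):
    """Build a few-shot prompt: show the model worked examples
    before asking it to produce similar output for a new topic."""
    lines = ["Here are examples of measurable learning objectives:"]
    lines += [f"- {ex}" for ex in examples]
    lines.append(task)
    return "\n".join(lines)

objectives_prompt = few_shot_prompt(
    examples=[
        "Analyze the causes of the 1929 stock market crash using primary sources.",
        "Evaluate two competing theories of inflation and defend one in writing.",
    ],
    task="Write three similar learning objectives for an introductory statistics course.",
)
```

The examples do the work here: they show the model the style (measurable, Bloom's-aligned verbs) without having to describe that style explicitly.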
"You are a political science professor with 15 years of experience" is my persona. And I'm going to demonstrate this now just to show you the quality of what it can produce. So I'm going to open my GPT tool here, and I've copied a couple of these prompts from this table just to make it a little bit easier. So I'm going to take this prompt. I'm using GPT-4 here, which gives somewhat better output, but it costs a little bit of money. So: you are a political science professor with 15 years of experience. I added some detail here. You can also add characteristics: I could say a cynical political science professor; I could say a wise one. So I can play with that a little bit. And then I say: create three essential questions for a first-year political philosophy course. And let's see the output. So it's going to go over the state of nature a little bit, and it's going to come up with these questions for me. And again, each time it does this, it's unique, which is interesting. I was practicing this before the session, and I got very different answers. And only with an expert set of eyes, or at least someone who knows about the state of nature, can we know the extent to which these are correct or not. "To what extent are contemporary notions of political right..." and so on. And let's change it a little bit. Now I'm going to say: you are a history professor. I'm not even going to bother spelling "history" right; it should guess that. And I'm going to say: for a first-year history of political science course focused on the state of nature. And let's see how it's different. So now we're linking it to historical events. Depending on the persona we use, we can get different results from these tools. Personas can be objects; they can be people; they can have characteristics. But again, this is going to pull on far more complex, specific data from the tool.
The second example I wanted to share with you is another useful way of creating teaching materials, and that is having the tool act as an outline expander. When it acts as an outline expander, the tool generates an outline, and then you can have it go in and expand on any of the bullet points. So imagine you're creating a case study. You can have a basic outline of it, then you can select one of those bullet points and go a level deeper, and so on. Let me demonstrate this one for you now. With an outline expander, you're almost making it into a program of sorts. So I'm going to say: act as an outline expander. And let's give it a try. I'm using the state of nature again, just because I know a little bit about this from my undergrad, so I can tell whether it's hallucinating or making things up. So it's going to act as an outline expander now, and it's going to give me a summary essay outline on the state of nature. It's going to go over Hobbes's view, Locke's view, Rousseau's view, and in a past iteration it also did modern interpretations. When it's done this, it's going to ask me which bullet point I want to expand further. So I'm going to do critiques from feminist and post-colonial perspectives. And now it's going to do another set of bullet points specifically around those critiques. I don't need to demonstrate the whole thing, but you can keep refining and specifying using this outline expander approach. The last example I wanted to share with you for refining teaching materials is to have the tool evaluate itself, either as a persona or just evaluate itself. I gave an example here of a recipe. I create a recipe, and then I ask the tool: evaluate the recipe you have just shared based on ease, deliciousness, and clarity; now rewrite the recipe. Based on that, the tool is able to use those criteria and make a better-quality output.
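The evaluate-then-rewrite pattern described here is a two-step chain: one prompt generates a draft, and a follow-up prompt feeds the draft back with explicit criteria and asks for a revision. A minimal sketch of just the prompt strings; the `critique_prompt` helper is a hypothetical name of mine, and both prompts would be sent in the same chat so the model can see its own draft.

```python
# Step 1: the generation prompt, sent first.
GENERATE_PROMPT = "Create a recipe for a weeknight vegetable curry."

def critique_prompt(criteria):
    """Step 2: ask the model to judge its own previous reply against
    named criteria, then produce an improved rewrite."""
    return (
        "Evaluate the recipe you have just shared based on "
        + ", ".join(criteria)
        + ". Now rewrite the recipe to score better on each criterion."
    )

followup = critique_prompt(["ease", "deliciousness", "clarity"])
```

Naming the criteria explicitly is what makes this work: the rewrite is steered by the same rubric the model just used to critique itself.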
So again, we're using the prompts to improve our output with the tool. I'm going to give you a chance in a moment to play with these prompts, but I want to go through a couple more examples, and this is a little bit more like Rich's example, where we get into really complex prompts. This one is from Mollick, 2023. Mollick is a professor at, I'm forgetting the university name, but he does quite a bit around different prompting. You'll see this prompt has a role in it: you are a quiz creator of highly diagnostic quizzes; you will make good low-stakes tests and diagnostics. And then it actually gets the tool to ask him questions to refine the prompt. I've linked this in the guide. If, while I'm talking, you want to go in there, try this in your GPT and just see what the result is, give it a shot. And here's another complex example. Again: you are a political science instructor; design 10 multiple-choice questions to test students' understanding; then write feedback for the students about the correct and incorrect options; link the topics together in your feedback to help students connect ideas. So there are lots of things going on in this prompt. Again, copy and paste this in, give it a shot, and see what the output is. So what I'm going to do now is give you a chance to apply one of those approaches I just used. We've looked at persona approaches, we've looked at outline expander approaches, and we've looked at self-evaluation approaches. I've added all of these to the guide. What I'd like you to do now is go through and give one of these prompts a shot using your GPT. I'll just give folks two minutes to do that now, just to get into the spirit of playing with these prompts. You don't need to share anything back; it's just the exercise of playing with them.
And then I'm going to end with one more example of the types of prompts you can use, and then we'll open it up for discussion. So just two minutes: give one of those a try. We looked at the outline expander, we looked at self-evaluation, and we looked at role-based prompts. And Amir asks: is GPT's understanding of prompts evolving? For example, is it possible that a prompt that is working fine now might stop working later? I would say yes, Amir. These tools have been changing, and how they respond to prompts has changed with them. I would also say that one of the challenges here is that responses are not deterministic, so how it responds to my prompt now might be different from how it responds to the same prompt later today. If you're used to programming or anything else with computers, this is quite challenging. And someone in the chat mentions that yes, people have reported some performance degradation with GPT-4 over time. I'll just give you one more minute to play with those prompts, and then I want to introduce one other type of prompt before we open it up to the group. All right, I have a seven-year-old daughter, so my sense of time is very flexible, and that was a very short one minute. I'm going to jump in and go over another type of prompt. Just because of time, I'm not going to go through all of our prompts for learning, but we've included a couple of different ways of prompting for learning, and we have a video of our session last week where we looked at specific prompts to help learners with tutoring, et cetera. We'll also be sharing these slides afterwards. But what I did want to share with you is what I think is one of the most fascinating prompts. It's such an interesting prompt that we might share it within our disciplines so that learners can go home and practice with it: getting GPT to play a game with you. So I tried this game, and I'm not going to demonstrate it now: "I want you to play a game with me to teach me how to identify logical fallacies."
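If you want to hand students this game prompt in a reusable form, here is one way to write it down; the explicit turn structure is my addition, not part of the original one-line prompt:

```python
def fallacy_game_prompt() -> str:
    """The logical-fallacy game prompt from the session, with an added
    turn structure (the extra structure is an assumption, not the
    original wording)."""
    return (
        "I want you to play a game with me to teach me how to identify "
        "logical fallacies. Show me one short passage at a time, ask me to "
        "name the fallacy it contains, and give me feedback on my answer "
        "before moving to the next passage."
    )
```

Spelling out the turn structure tends to keep the model from dumping a list of fallacies all at once instead of playing turn by turn.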
What GPT does when I add that prompt is it starts giving me passages containing logical fallacies, having me identify which logical fallacy each one contains, and then giving me feedback based on my answer. So really, the sky's the limit on the different games you can play with it, and as a way of engaging students in different disciplinary areas, I think this is really powerful. I'd like to demonstrate with a whopper of a prompt. This is from a history professor, and again, I've shared the link to it online. He has developed a complex medieval role play to help students understand an area of history, and I'm going to demonstrate it now with GPT. I'm really excited by this prompt compared to many that I've seen. So I'm just going to share GPT with you again, and I'm going to grab that prompt; you can find it on the worksheet. This is the history simulation prompt, and what you'll see in there is different roles for the prompt and real complexity in terms of context. We're asking it to play a role-play game of educational history for university classes. He's set the role play in Paris in 1348, he's set some challenges, and he's set some complex gaming rules. So let's try. This is a medieval plague simulator. It's now created, and it starts by giving me my current status, my inventory, forces in Paris to be careful of, and then a set of commands: inventory, diagnose, et cetera. And it starts with turn one. So what should I do? Should I approach the person to diagnose them? Should I engage the merchant to see what he wants? Let's try number one. Now it's going through the turns with me. What are my treatment options? Do I want to administer a syrup? Do I want to perform an exorcism, et cetera? And it keeps going with the turns.
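For anyone who wants to drive a role play like this programmatically rather than in the chat window, a sketch of the wiring: the prompt becomes the opening message of a chat session. The summary wording below is my paraphrase of the professor's much longer prompt, and the model name is an assumption:

```python
# The role-play prompt becomes the opening message of a chat session.
# This short wording is a paraphrase of the professor's full prompt.
ROLE_PLAY_PROMPT = (
    "Let's play an educational history role-play game for a university "
    "class, set in Paris in 1348 during the plague. Each turn, give me my "
    "current status, my inventory, and a list of commands (inventory, "
    "diagnose, and so on), then wait for my choice before continuing."
)

messages = [{"role": "user", "content": ROLE_PLAY_PROMPT}]

# To actually run it (requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
```

Keeping each of your turn choices appended to `messages` is what lets the simulation carry state from turn to turn.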
So, you know, give it a try. Paste that prompt in if you would like. What it did for me, and I think for a few of us, is make us realize the complexity of these tools and what some of the opportunities might be in sending our students home with a prompt to try as a game. So I'm going to stop there now. As promised, this has taken us longer than 20 minutes. But I'd like to turn it over to Judy now, who's going to run a discussion with us. Please go ahead, Judy.