So, welcome to our promptathon. We are going to spend the next two and a half hours learning a little bit about prompting, and for those of you who don't know what prompting is and signed up anyway, we will give you an introduction to what a prompt is, and then we will do a lot of practice and exercises to get to know prompting. I would like to introduce who we are. But before that, I would like to make an announcement. I am facilitating this workshop from my home in South Vancouver, on the traditional and unceded land of the Musqueam, Squamish, and Tsleil-Waututh Coast Salish peoples. And this is going to be a slightly different acknowledgement. I would like to start by saying that a lot of the declarations that we make, "we" meaning artificial intelligence, computing, and data science people, say that when we advance using tools such as artificial intelligence, we are doing it to improve human good, that we really want to use these tools to advance humankind. Language like this has appeared in many of these declarations. But in 2020, a group of Indigenous communities got together and put a position paper out. I have not had the chance to finish reading the whole paper; it's about 200 pages long. But one thing that really struck me, something that I learned, is that for many Indigenous communities around the globe, humans are neither at the height nor at the center of creation. So when we are using tools such as artificial intelligence and computing and data science, we should remind ourselves to make connections with each other and with nature, with winds and rocks and plants and animals. Only when we pay respect to nature can we be a part of creation.
So I would like to remind us all to think about how we can use prompting in this workshop and what we can do to foster relationships with each other and with nature. So who are we? We are a cross-team of staff from CTLT: Lucas, Rich, Manuel, Joe, Marie, Nova, and myself. We care about the impact of generative AI in teaching and learning. We are here to learn with you, and to share some things we learned in the last few months. We are still learning; I am still learning. We also invited Varada Kolhatkar, an assistant professor of teaching in computer science, who is going to share something about prompting and generative AI with us. We have a blog, and on the blog are some resources that are going to stay there for the longer term; the link to the blog is in the chat. We have two Google Docs. One is for us all to ask questions, and we will be monitoring that questions doc. The second Google Doc has some of the instructions for the activities; we are going to have lots of activities throughout the two and a half hours, so you can follow along there. Feel free to ask questions as we go; we will be monitoring the chat. Please stay muted, because there are lots of us here, and it minimizes distraction when we all mute ourselves. Thank you. So again, today we are going to focus on prompting in generative AI tools and how we can prompt more effectively in the teaching and learning context. Over to you, Rich. Thanks, Judy. Hi everybody. So this quote, which I'm going to read, I've ever so slightly paraphrased, but really not very much, and I'll show you why in a minute: it seems certain that this technology will cause something of a revolution in the availability and use of complex task completion in the world outside of schools.
Current research indicates that there is considerable agreement that it should also play an important role in school instruction. But viable school roles will not be established without finding solutions to many problems: problems of philosophy, problems of curriculum and methodology, problems of design and school management of the technology themselves. In the belief that solutions to many of these problems should be worked out in actual classrooms, a small band of local teachers and myself began several years ago to explore classroom uses of this technology. So it kind of sounds like that might have been written about generative AI, or maybe large language models, probably in the last six or nine months in some research paper. But this is actually the abstract of something that was written in 1976, and "this technology" is actually calculators. There are several parallels between calculators and generative AI, large language models, and other such AI technology. In 1976, and for a few years before and after, schools were very wary about the introduction of calculators into classrooms. There were thoughts that it could basically ruin math education: kids are not going to learn the math anymore, they're just going to cheat using these calculators. And we started to see those arguments again in 2023, when talking about generative AI. I'd like us to remember that most of us here will have gone through school around 1976, one way or another, and we can still do math. We may not be brilliant at it, but we can still do math. Education wasn't ruined. It took schools months, and in fact years, to change the way they teach, especially math education, and to change the syllabi that they use.
And I think it's quite possible, if you're among the roughly third of you who have told us that you haven't really used much LLM stuff before today, that over the next little while you will start thinking about how your classroom experience, and the way that you help instructors, can change. We've been here before with calculators; we'll be here again with something else, and we'll get through it. So the next slide is about privacy. If you've been thinking about privacy, which I hope you have, this is something we need to get out really early in today's workshop: at UBC, none of the tools that we're going to be talking about today, and many of you will have heard of ChatGPT, but there are others, have been through a privacy impact assessment. What that means is that you're still able to use them, but you have to be very careful about your use of them in classrooms, especially when it comes to anything that approaches personal or private information. So one really key point that we'd like you to remember: ChatGPT and several other GenAI tools reuse the prompting data. When you type stuff into ChatGPT, the company behind the tool uses whatever you put in as part of the learning process to improve the tool. Additionally, several of them have, let's be generous and say, ambiguous terms of service. So please be cautious about the information that you feed into these tools. Joe is going to speak later about privacy specifically, but we wanted to get this out really early, so that over the next two and a half hours you are cautious about how you use them and what you say and do in them. Who's next? I'm just going to go through the agenda.
We have two and a half hours together. We are going to introduce what generative AI is; based on the survey that we sent out last week, about half of the respondents know very little about generative AI or prompting, so we're going to go through some introduction. Then we'll look at using prompts to improve your output. We are going to introduce a few different prompt approaches, including refining and interacting, and towards the end combining them, with activities throughout, and we hope to have all of these running by the end of the promptathon. Overall, if you engage in this promptathon, we hope you will have a better understanding of the value of prompting for improving the capability of generative AI and large language models. Specifically, we hope you will be able to: identify the basic operation that is prompting in common generative AI and large language models such as ChatGPT; identify current practices for developing effective prompts; write some prompts using the many different approaches and patterns that we are going to share in this workshop; and develop a set of prompts for your own practice or teaching that combines prompt approaches and patterns. In the survey email that we sent last week, we asked you to take a moment to think about what you would like to achieve by the end of the session. About a third of you responded and filled out the survey, and here are some examples of what you shared: you would like to create teaching and learning materials such as lesson plans, assignments and associated rubrics, and case studies, or maybe use generative AI to help you refine your course outline, your syllabus.
One of you said that you would like to use it to understand students' thought processes; I would like to learn from you and chat with you about how you plan to do that. Others mentioned preparing for your next meeting, agenda planning, and conference planning; one of you wants to plan a conference using ChatGPT or other generative AI tools, producing documents, spreadsheets, lists, code, et cetera. So that's what some of you contributed as your goals, but I would like to give everyone another minute: think about why you are here today. What's your goal? What would you like to achieve by the end of the session? Okay, a minute of quiet time. I am checking the time on my phone. Okay, one minute is up. It's important to keep that goal in mind as we do the many prompting exercises; it's easy to get distracted, but it's okay to get distracted too, because we would like to play along as we learn. I also wanted to share some results from our survey. One of the questions we asked is how much you know about generative AI. As of this morning, out of the 58 responses, about one third said they know very little or have only heard about it. Some of you have already been playing around with it, and some of you have actually started using it in your teaching and your work. We also asked what interests you most about learning generative AI today: getting more of an introduction, getting some prompting patterns, and hearing from others sharing their techniques. Thank you. We also asked what specific things you would like to learn, and you listed a lot of topics. Many of them are beyond the scope of today's session, which focuses on prompts and prompting techniques. We will continue to look at your requests, and we will hopefully develop more workshops in the coming months, but no promises yet, as the landscape of generative AI is changing so rapidly.
So no promises, but the promise is that we are going to look at what you'd like to learn and develop something more in the future. Next, I would like to introduce Varada Kolhatkar, who is going to share with us what she knows about language models. Okay, thank you, Judy. Hello everyone. In my part, I'm supposed to tell you what generative AI is, and I'm going to try to do it in the next 10 minutes. You might have heard different names for this: people call it generative AI or large language models, the tools and techniques used for it are deep learning or transformers, and some people call it a writing assistant, and so on. We will just focus on the large language model aspect of generative AI. I'm going to start by asking you a question. If I tell you to complete this sentence: "I am ___". Your task is just to complete the sentence. You can say anything, like "happy" or "hungry" or whatever. It would be great if you write your response in the chat, so that we will know whether you're paying attention or too tired, okay? Good, you're tired, Ryan. Okay, perfect. Thanks. So there are many, many possibilities here, right? I am happy, I am hungry, and so on; these are all reasonable responses. But if I ask you to also provide the probability associated with your response, something like how often the word "happy" occurs after the sequence "I am", then it's a little bit hard for human beings, right? That said, we have access to the full internet, and we can calculate these probabilities: we just take all the text that's available on the internet, figure out how often the word "happy" occurs after the sequence "I am", and assign a probability to it, okay? And basically, that's the idea of a language model.
So what does a language model do? A language model computes probability distributions over sequences, and these sequences can be sequences of words or sequences of characters. This idea actually originates with Claude Shannon in 1950. Shannon came up with this game of predicting the next word, which he called the Shannon game, and he played it with his wife: he came up with incomplete sequences, asked his wife to complete them, and calculated probabilities. We are still using the same idea; it's just that the scale is much, much bigger now. Here is an example of a language model in use. Most of you must be familiar with voice assistants like Amazon Alexa, Siri, or Google Home. When we talk to these voice assistants, they are supposed to understand what we are saying and create an appropriate response. So if I say something like "I have read this book", then this other sentence, "Eye have red this book", with the organ "eye" and the colour "red", sounds exactly the same, right? So how does the voice assistant know which one you are saying? If you have a good model of language, which tells you which sequence is more probable, then the voice assistant is going to do a better job, right? The two utterances sound the same, but a good model of language would know that "I have read this book" has a much higher probability than "Eye have red this book", okay? That's how a language model has traditionally been used. Now, what's the simplest way to do this? The basic idea is: you are given a sequence of words, and you want to predict the next word given that sequence.
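The homophone example above can be sketched with a toy bigram model: score each candidate transcription as a product of word-pair probabilities and keep the more probable one. All the numbers below are invented for illustration; a real system would estimate them from a large corpus.

```python
# Toy bigram scoring of two sound-alike transcriptions.
# All probabilities here are invented for illustration only.
bigram_prob = {
    ("<s>", "i"): 0.10,   ("i", "have"): 0.20,   ("have", "read"): 0.05,
    ("read", "this"): 0.10, ("this", "book"): 0.08,
    ("<s>", "eye"): 0.001, ("eye", "have"): 0.01, ("have", "red"): 0.002,
    ("red", "this"): 0.001,
}

def sequence_prob(words):
    """Score a sentence as the product of its bigram probabilities."""
    p = 1.0
    for prev, cur in zip(["<s>"] + words, words):
        p *= bigram_prob.get((prev, cur), 1e-6)  # small floor for unseen pairs
    return p

# The sensible transcription gets the higher score:
sequence_prob("i have read this book".split())   # larger
sequence_prob("eye have red this book".split())  # much smaller
```

A voice assistant would combine scores like these with its acoustic model to pick the transcription that is both plausible sound-wise and probable language-wise.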
That is, you want a word which is very, very likely given the sequence of words so far. One easy way to do this is by counting co-occurrences of words. For example, take Smart Compose in Gmail: if I have typed something like "wanna grab dinner next" and I want to predict the next word, I can take all the emails in Gmail and count how often the word "week" occurs after "next", and if "week" has the highest probability, that's going to be my prediction, okay? So this is one way to calculate these probabilities and predict the next word. But the problem with this approach is that we are only considering the one previous word, and this is a very, very simple model of language: to predict the next word, just consider the current word and, based on co-occurrence counts in your data, figure out which word is likely to occur next. The problem is that language is usually very complex, and there are long-distance dependencies in language. If you are given a sentence like "I'm studying law at the University of British Columbia Point Grey campus in Vancouver because I want to work as a ___", what do you think would be a good completion here? Can you write it in the chat? "Lawyer" or "legal counsel" or something like that, right? But if I just have the previous context word, "a", then we cannot really predict it properly, right? To predict it properly, we need access to information which is far away from the current position, more than 15 words away. A simple model like this Markov model cannot capture that. So, enter attention and transformer models.
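The counting idea described above fits in a few lines. The tiny "corpus" below is invented, standing in for the email data a real system would count over:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "all the emails" -- illustration only.
corpus = ("wanna grab dinner next week . "
          "see you next week . "
          "next time bring snacks .").split()

# Count which word follows which: a first-order Markov (bigram) model.
following = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    following[prev][cur] += 1

def predict_next(word):
    """Most frequent word seen after `word`, with its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

predict_next("next")  # "week" follows "next" in 2 of its 3 occurrences here
```

Note that `predict_next` sees only the single previous word, which is exactly the limitation described next: it has no way to use "studying law" to predict "lawyer" fifteen words later.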
If you look at any large language model available right now, all of them are based on this technique called transformers; under the hood they are using a transformer-based architecture. At a very, very high level, the idea is that these transformer models capture long-distance dependencies; they capture the complicated relationships between words, how the words interact with each other, and all the complex patterns in text. How do they do it? I'm going to give you another example. I asked this question to ChatGPT a couple of days ago: "In one, two, three words, what does the pronoun 'it' refer to in the sentence 'If the baby does not thrive on raw milk, boil it'?" As humans, we know that if we want to figure out what the pronoun "it" refers to, it has to be the raw milk and not the baby, right? There are two possibilities, "the baby" and "raw milk", but from our common sense we know that we don't usually boil babies, so it makes sense that "it" refers to the phrase "raw milk". And ChatGPT gave me the correct answer. Basically, when we try to answer questions like this, what happens is that we selectively focus on specific things in the text. If we have this word "it" and our job is to figure out what the pronoun "it" refers to, then what are the important things here? There are the two noun phrases, "the baby" and "raw milk", which are candidates for "it", and then there is the action of boiling. So when humans process text like this, "raw milk" and "the baby" get some attention, "boil" gets attention because it is the verb, "the baby" is the subject of "thrive", "raw milk" is its object, and so on. We pay attention to all these things, and in transformer models something similar happens.
They have something called self-attention, and the idea is very much inspired by how we process text: when we are reading, we selectively focus on specific parts of the text which are relevant and ignore parts which are irrelevant. At a very, very high level again, in this particular case we are asking this question to ChatGPT, so it will pay attention to all these things: the instruction that we want the answer in one, two, three words; the fact that we are interested in figuring out what the pronoun "it" refers to; and then the candidate phrases and the verbs. Here I'm just highlighting where the model pays the most attention: green is high attention, blue is significant attention. Words like "the" and "in" don't carry much information, because they occur everywhere, so the model pays low attention to them, and so on. Now, you might wonder how we learn these weights. These weights are learned during the training process. The model takes data from all of the internet, the full Wikipedia, and so on, and it learns all these complex patterns in language, and it learns these weights: which parts of the text to pay attention to. Now, you might be thinking: if we are just trying to predict the next word, how can ChatGPT and similar tools answer these complex questions? The previous GPT models were based just on next-word prediction, but in ChatGPT and GPT-4 they have incorporated something more. They still train a language model, that is, they are still doing next-word prediction, but that alone is not suitable for the question-and-answer format that we are interested in. So they fine-tune this language model on several question-answering tasks, okay.
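At the level described above, self-attention is just similarity-weighted averaging: each word's output is a mix of every word's vector, weighted by how well the query matches each key. Here is a minimal sketch of scaled dot-product attention over hand-made two-dimensional word vectors; the vectors are invented for illustration, whereas real models learn high-dimensional vectors and these weights during training.

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention: each output mixes all the values,
    weighted by how similar the query is to each key."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Hand-made 2-d vectors: "it" is built to resemble "milk" more than "baby".
vecs = {"it": [1.0, 0.2], "milk": [0.9, 0.1], "baby": [0.1, 0.9]}
xs = [vecs[w] for w in ["it", "milk", "baby"]]
out = self_attention(xs, xs, xs)  # out[0] leans toward the "milk" direction
```

Because the vectors for "it" and "milk" point the same way, the attention weights for "it" favour "milk", which is the toy analogue of the model resolving the pronoun.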
This is called instruction tuning, or reinforcement learning with human feedback, if you want to search those terms. These are the two main things which made ChatGPT so successful. So what is the prompt, then? The prompt is the context that we are providing. Again, remember that in language models, what we are trying to do is just predict the next word, so the context and the instructions play a very, very important role. In the example, the instruction "in one, two, three words" and the question "what does the pronoun 'it' refer to?" are that context: all of that information is basically the prompt, okay. The context that we provide for this next-word prediction is called the prompt, and it is going to guide the prediction. So here is a proper definition: a prompt, for generative AI or a large language model, is the seed statement, the basic statement that we use to control the output given to us by the generative AI. That's all I want to say, and I'll pass it over to Joe. Thank you. Thanks, Varada. So yeah, that all sounds very exciting, but when we dive into using these LLMs, we still have to consider that there are some real limitations. These tools don't really have a real understanding of what they're saying, right? They're literally predicting the next word based on probability; they can't do common-sense reasoning or anything like that. So there's a term that AI researchers have coined: hallucinations. If you see this term, it describes when a GenAI gives you a response that is very convincing, so convincing that you want to trust it, you believe it, you think it absolutely has to be right, but the problem is that some of the time these responses are completely fictional, completely made up. It happens for quite a few reasons.
The most obvious is probably a limited data set. Yes, it has access to text on the internet, but when you compress all that text and run the models on it, things can get interpreted a little differently. I've seen examples of this where you ask ChatGPT for citations, right, and the citations it spits back seem really real: there's a title, there's a URL, but the title is completely made up, and the URL might work but link to something else. One way to deal with that is by adjusting the temperature. It's kind of hard to do in ChatGPT, because you have to call on the temperature property and tell it how to use it. But if you use the chatbot in Bing, Microsoft's search engine with GPT built in, they make it really easy, with a slider between a more creative answer and a more accurate answer. More accurate is potentially more truthful; more creative is potentially more made up, or more random. So, as Varada mentioned, the next steps to make these responses even better are augmenting the data through human feedback or instruction tuning. Trust scores can be added in, or you can apply these large language models on purpose-built data sets, where you know it has all the data you're going to be asking about, and then it can help you transform that data through your prompt. So, just to sum up hallucinations: it's best not to trust what ChatGPT gives you right out of the box. You have to keep an open mind and know that it might not be completely true. So it's better to use it for brainstorming at first, instead of treating its output as a factual response. I just wanted to highlight those, because today when we start playing with prompting, you'll be able to... sorry, there's a question here.
So, temperature. Temperature is just what the property is called in the GPT model for creativity. Think of it as creativity: the higher the temperature, the more creative it is; the lower the temperature, the more deterministic, or accurate, it is. You don't really have to worry about it; I just wanted to mention temperature. So, okay, think about that today, and just remember that what you're reading might be completely made up. Next slide here. The other thing I wanted to mention, which Rich touched on briefly earlier, is privacy. There are a few things we can do to mitigate risk here. Whenever you're entering real data, you can do tricks like pulling out names: institution names, student names, anything that's identifiable. The models do a really good job of working with whatever placeholder you put in; if you put something like [name] in square brackets, it will use that placeholder in its response. So that's a good trick. The other thing on ChatGPT is that if you dive into the settings, they have a toggle that allows you to turn off the chat history, and then maybe it won't be saved to their database. I don't know if we can trust that, based on their terms of service, but it's still probably a best practice to turn that off if you're concerned at all. In addition to ChatGPT, which I think we've asked you all to sign up for today, there are a few other GenAI tools available out there. I mentioned Bing, which is built into the search engine. Google has one called Bard, which isn't available in Canada yet, and there's Claude 2, also not available in Canada yet. Then I mentioned on the slide here Talk AI, which uses GPT, I think, and Llama Chat, which uses the Facebook large language model. Numbers six and seven there you can use without an account.
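Joe's description of temperature above can be sketched as dividing the model's raw scores (logits) by the temperature before turning them into probabilities and sampling. The candidate words and scores below are invented for illustration:

```python
import math
import random

def sample_with_temperature(words, logits, temperature, seed=0):
    """Divide the scores by the temperature before softmax-sampling:
    a low temperature sharpens the distribution toward the top word
    (more deterministic); a high temperature flattens it (more 'creative')."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one word according to the scaled probabilities (seeded for demo).
    r = random.Random(seed).random()
    cumulative = 0.0
    for word, p in zip(words, probs):
        cumulative += p
        if r < cumulative:
            return word
    return words[-1]

# Invented candidate continuations of "I am", with made-up scores:
words = ["happy", "hungry", "a", "tired"]
logits = [3.0, 2.0, 1.0, 0.1]

sample_with_temperature(words, logits, 0.01)   # near-greedy: picks the top word
sample_with_temperature(words, logits, 100.0)  # near-uniform: could be any word
```

The Bing slider Joe mentions is, at heart, moving this one number up and down.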
You might not get the same responses as you'll get with ChatGPT 3.5 or 4, but if you have any trouble logging in today with any of these, you can always just write down your prompts, and as we work through the patterns you'll still be able to get a lot out of today. So that's all I have; I'm going to pass it over to Rich to dive in and actually start looking at some prompts. Over to you, Rich. Thanks, Joe, and thanks everybody so far. We've talked a lot of theory and a lot of high-level stuff, and Varada gave you a really high-level but excellent overview of how these tools work. So now that we have at least a high-level overview of how they work and what a prompt is, the next step is to see what they can do. I suspect a bunch of people here who have already used some of these tools haven't been particularly impressed with some of the answers they've been given. Hopefully, with the example that I'm about to show, and with the next two hours' worth of ways in which you can improve your prompts, if you have had a bad experience in the past, you will have a better experience in the future. These tools are really powerful, and because they're really powerful, it generally means you need to do a little bit of learning to get the best out of them; that's kind of the point of this promptathon. But we're going to jump essentially to the end: I'm going to show you an output of what ChatGPT, in this case ChatGPT 3.5, can do, but I need to give you a little bit of context first. This was developed for a law prof at UBC, but this idea has actually been around for a while; we've been working on it for about six years. We made a first version of it six years ago, and I'm just going to quickly share my screen. It has been running on the law prof's website for six years, and he's run the course every year, I think.
This tool is regularly mentioned in the prof's ratings; students are really grateful the tool exists, and it has actually helped students. So, the Socratic method, for those who aren't fully familiar with it, is essentially a way to engage people in critical thinking. It asks questions, and the goal is to help refine answers, or a statement, based on those questions. Yes, it comes from Socrates a couple of thousand years ago, and the method has been refined since then. This particular example on the Video Game Law website essentially walks students through six questions, and it's very, very basic, because, like I said, we developed it six years ago, before we really had any idea of what the potential of large language models could be. And yes, live demos: of course it's broken. I think I had put two spaces in there, which confused it. But basically, this tool asks you a series of six questions, and at the end of it, the idea is that it helps you refine your thinking and end up writing an essay. Now, it's really valuable for lawyers, because the obvious point is: if you have a position in a legal case, you want to try to think of what the other side might think, right? You want to think of the questions they have. Say, for example, you're defending a defendant: you want to think of what questions the other side might ask. So the Socratic exercise, for law students and lawyers in general, is all about questions. This whole Socratic method exercise has evolved over the last few years, and for the last few months we have been trialing GPT, large language models in general, but specifically GPT here, trying to get something useful out of it, because the big problem with this exercise as it has existed for the last six years is that there's no real context. All it does is fire through loads of questions.
It just fires through the six predetermined questions; it doesn't really take into account any of the actual answers that the student puts in the boxes, or it does only very, very loosely, and pseudo-randomly. We wanted to see whether we could improve upon that, and whether large language models could. I'm not going to show you the prompt yet; we'll get to that in a minute. But this is the output, and it kind of blew my mind. I'll forewarn you, there's going to be a bit of a swear, because it's relevant. It's a wall of text at the moment, and we'll walk through it in a minute, but once I showed this to the law prof, he literally wrote: "holy shit, that is better than I could do." That was a real light-bulb moment for a bunch of us, because this genuinely could be the next version of the Socratic method exercise, one that gives context to the students as they go through the process. So the first question here, you'll notice, is the exact same question that was on the website a minute ago: name a digital world issue that interests you, in five words or under. I wrote "loot boxes in video games", and because I didn't put a double space (my space bar hasn't broken on this Mac), ChatGPT fully understood it. It says: question two, thinking about loot boxes in video games, how do you think the current legal framework surrounding consumer protection... I haven't mentioned anything about gambling, but sure enough it has determined that loot boxes are about gambling... consider aspects such as age restrictions, transparency, and the potential for addiction. I mentioned none of this at all; I literally wrote five words, and one of them is a pointless word, a word you would ignore, right?
And it's come up with this. Now, to go back to Joe's point a minute ago about links: these three links are actually made up; none of them exist. However, each of them goes to a real website: this one goes to Esquire, this one goes to AAP publications, and this one, I think, goes to the BBC. If you go to each of those three websites and type titles like these into their own search, you'll find articles that are actually about these things; it's just that these particular links don't exist. It's kind of a weird hallucination, but anyway, we'll skip past that for now. So it asked me to think about these things, and my response was: children shouldn't be exposed to gambling in video games. Its reply: building on your concern about children being exposed to gambling in video games, how do you think the video game industry should address the issues of loot boxes to ensure the protection of young players? Consider possible regulatory measures, industry self-regulation and parental control options. I've only really given it 11 words so far, and it has asked me two fundamentally fantastic questions and given me some context. Yes, the links are made up, but we'll get to that later; it genuinely has understood what we're trying to do, right? So I've answered that question. By the way, I'm playing along here as if I'm a student. Sure, my goal here was to find the right prompt, but genuinely, by this point I was fully engaged. I was answering it like I was a law student (and, much to the disappointment of my parents, I'm not a law student). The point here is it genuinely engaged me.
So: reflecting on the responsibility of video game developers in relation to parents allowing their children to play games with loot boxes, what are your thoughts on the role of parental responsibility versus the responsibility of game developers in ensuring the well-being of young players? Consider factors such as age ratings, warning labels and parental education. This is basically, as John, the law prof, said, essentially the perfect question here, because it's getting you to think of more than one side. If you're a lawyer, you're probably representing one of these two sides, whether it's the game developers or somebody suing a game developer, and it's getting you to think about both. It's brilliant. Now, I wrote some stuff that a law student would probably be very embarrassed about, so let's skip past that quickly. The next question: building on the challenges you mentioned regarding jurisdictional complexities and the limitations of parental oversight, how do you envision the development of a clear legal framework that effectively regulates gambling-like mechanics such as loot boxes in video games? Consider the involvement of lawmakers and industry stakeholders, and the potential benefits and drawbacks of such regulation. Again, such a good question, and I've hardly given it any information. Sure, I wrote a couple of paragraphs here, but it has just understood the task. Now, "understood" is actually a word I shouldn't use, but it feels like it understands the task, and that is good enough in this context. It's a really powerful example. And yes, I did carry on with this; like I said, at this point I wasn't mucking about, I was fully engaged, I really wanted to answer these things.
So you'll notice here it basically asked me one final question, and it says: thank you for sharing your perspectives on the need for clarity regarding loot boxes, age ratings and protecting children's mental health in the context of video games. Your insights highlight the importance of age ratings being an accurate reflection of the content and its potential impact on players. It then asked me to write a 100-word essay, and it says: after receiving the essay, I'll ask you several survey questions to gather your thoughts on the Socratic method and your experience with this exercise. Now, you'll notice I actually haven't finished this one, because I was genuinely jumping around the room; I couldn't believe how good this whole process was. There is another example where I do go through the whole thing, and it does ask me a set of survey questions, which are again meta to the whole process. It asked me things like what I thought of the process as a whole, of the Socratic method: what did I learn? What could be better? It's helping the student think not only about the exercise itself but about the meta part, the Socratic method exercise, which is great. Okay, so how did we get here? How did we get this output? I'm not going to show you the prompt just yet. If any of you have received an email from me (and there are a hundred and something people in this room, so chances are some of you have), you'll be very unsurprised to see that the prompt comes out as a wall of text, because that is kind of the way I do things. But this is how I started. Yes, this is today's version, by the way; I didn't have a copy of the original saved, but I did have my whole methodology saved, so I'm not lying, this is just today's written-out example. Anyway, this is how I started: write a Socratic method exercise for a course on video game law. It's not a good prompt. And it didn't give me a particularly good answer.
It basically wrote out a scenario explaining what the Socratic method was, and then it gave, if I remember rightly, something like nine steps. Each of these steps asks a sort of question, but ends up being quite specific. A student could potentially go through these steps, but they wouldn't really gain anything contextual. So it didn't start well. So I learned, and I was like, okay, how can I help this thing get better at what it does? I then moved on to "please produce" (at this point I'm trying to be nice and asking it to do things politely): please produce a Socratic exercise for university students around the topic of legal issues in the video game industry. Each part of the exercise you write should include several example questions the law professor can ask their students. So we're getting there; we're not really in the same ballpark just yet, but at least now we're actually getting a different set of questions. And the number of steps reduced a little, to five, because nine was just way too long for a Socratic exercise. It also gives a few hints in terms of what the prof could do, which is kind of nice, I think. But the goal of this really is to make the prof's job even easier, so they don't have to actually do anything; they really only need to see what happens at the end. Okay, so it's kind of better, but not great. The next step: I decided to give it an example. I was like, well, we already have an example on our website. What would happen if I gave it that example? Would it learn from it? So these are the five questions that we've asked on the Video Game Law website for the last six years.
It doesn't matter what these are for now, it really doesn't, so I'm just going to skip past them. But I gave it the example of those five questions, and then I said, your role (because I haven't really told it what I want at this point; I've loosely alluded to it, but I haven't been specific, right?), your role is to ask questions as part of a Socratic exercise. You will respond to my answers only with questions that are relevant to the answers I provide and Socratic in nature. I mean, that's not a great sentence, but it kind of works it out. You should ask five or fewer questions in total. Your reply to this prompt, and therefore your first question, should be: name a digital world issue that interests you in five words or under. So here we go; it's getting there, it sort of understands it now. So now: name a digital world issue that interests you in five words or under. Loot boxes in video games. That's just what I was going with at the time; apparently it was in the news a lot. How do loot boxes impact players' experiences in video games? I mean, that's not a terrible question. It's nowhere near as good as the questions we saw a minute ago, but it's getting there. They are gambling and thus have potential to negatively impact children. Then: what specific elements of loot boxes make them resemble gambling? It's an okay question, but not a great question for a lawyer, because it doesn't really dig in; it's just asking about something very specific in my answer, the random and addictive stuff. So: what are the potential consequences of loot boxes being addictive and resembling gambling, particularly when it comes to children and their wellbeing? Okay, that's not a bad question. What are some arguments in favor of legislators taking action to regulate loot boxes in video games, particularly when it comes to protecting children? Again, not too bad. So it was kind of getting it.
These questions were kind of okay. At this point (I've only shown you three prompts here, but I'm probably 20 or 30 prompts in by now) it's starting to understand, well, not understand, it's starting to get what I'm trying to do. So how do we get from here, which is pretty good, to here, which is really good? Well, here comes a wall of text. I'm not going to read it all out, because I value your sanity, but I'll give you a high-level overview of what it is. These first two paragraphs are actually the Wikipedia definition of the Socratic method exercise, almost verbatim; I changed a few bits and pieces, but not very much. What this gives the model is a concrete, contextual understanding: this is what I mean when I talk about the Socratic method. Then I do what I did a minute ago and give it the five questions that I've asked in the past. And then I give it a problem statement: however, these questions are too generic. This, by the way, was a really key piece. I hadn't really given it a problem statement before; I'd given it the examples, but I hadn't told it why those examples weren't good. They're too generic, because that's exactly what the problem is. And then this piece is what makes up the majority of the prompt: using this information and other knowledge you have about how the Socratic method works, and thinking specifically about the pedagogic value of the Socratic method for post-secondary education, your task is to ask a series of no more than five questions, one at a time, which helps a student go through a Socratic method exercise. Your first question will be: name a digital world issue that interests you in five words or under. Just to point out, by the way, I've told it what its first question will be here, and there's still a lot more to come.
So, thinking about the next-word prediction thing: it needs to keep all of this context until it gets to here. Remember that, right? When the student responds to your question, you will then formulate a follow-up question that asks the student to think more broadly about their topic when framed around the law in the video game industry, but frame that question contextually based on their reply to the first question. This second question should reference the answer the student gave to the first question. The second question should reference the answer. Oh, I repeated it, there you go. Continue like this for up to five total questions, with the ultimate goal of helping the student produce a 100-word essay about their topic. After you have prompted them for the 100-word essay in the final Socratic question and they have replied with that essay, you should ask them several survey questions about their thoughts on learning this way and what they liked and disliked about the Socratic method. Your role is that of the person asking the questions; you should not answer those questions. So please only provide the questions, one by one. This is because, in some of the 30-odd prompts I went through, it ended up asking the questions and then immediately answering them; it just didn't give me the option, so I had to be very, very instructive: only ask the questions. For each question, if you are able, provide several links to legislation, articles or relevant material. ChatGPT 3.5 fails here, because it makes them up. Your reply to this prompt should be the first question to ask the student, and subsequent replies will follow the above logic. And this is the same example I showed you earlier. So that's a lot I've just thrown at you in those ten minutes or so, but hopefully the general concept is: have a conversation. Large language models are trained on language, human language.
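To make the anatomy of that wall-of-text prompt concrete, here is a small sketch of the four parts Rich describes: a definition, past examples, a problem statement and a specific task. Everything below is an illustrative placeholder, not the actual prompt text from the talk.

```python
# Sketch of the four-part prompt structure: definition + examples +
# problem statement + task. All strings are illustrative placeholders.

def build_socratic_prompt(definition, past_questions, problem, task):
    """Assemble the four parts into a single wall-of-text prompt."""
    examples = "\n".join(f"- {q}" for q in past_questions)
    return (
        f"{definition}\n\n"
        f"Here are the questions we have used in the past:\n{examples}\n\n"
        f"However, these questions are too generic. {problem}\n\n"
        f"{task}"
    )

prompt = build_socratic_prompt(
    definition=("The Socratic method is a form of cooperative dialogue "
                "based on asking and answering questions."),
    past_questions=["Name a digital world issue that interests you "
                    "in five words or under."],
    problem="They do not adapt to the answers the student gives.",
    task=("Ask no more than five questions, one at a time, each framed "
          "contextually around the student's previous reply."),
)
```

The helper name and argument strings are made up for illustration; the point is only that the four parts are distinct pieces you can iterate on independently, exactly as happened over those 30-odd attempts.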
The way that I have learned to use these tools is to treat the large language model as if it's a very green intern: it has energy and some knowledge, but needs really good instruction to get the best out of it. And I think, as humans, the hundred and something of you here, if I were to read this prompt out to you as a person, you would have a decent shot at doing it. That is essentially what the large language model has done here. Okay, that was quite a lot. So I think what we need to do now is get a decent understanding of how we go from the very first prompt I showed you to where I ended up. I wrote down the prompt's parts: a definition, an example, a problem statement and a specific task. But is there something more generic that you can learn? Sure, I've shown you an example, but is there something you can apply to your own work, whether that is teaching or helping teachers teach, that you can use for prompting? And that is Lucas's job. Wonderful, thanks Rich, that was great. And as mentioned, it's such a rich prompt that Rich did, and there's so much to it that in the next little while we're going to unpack it a bit. What I'm going to do with you is start walking through what good prompting is right now, some of the things that I've learned about effective prompting, as well as what's in the literature. Now, I want to acknowledge that for many of you who have been prompting, there's a lot of intuitive understanding here. Because this is natural language, many of you intuitively understand what a good prompt is. So for the rest of the session, we're going to work through some prompt patterns. A prompt pattern is a different way of prompting, and we're going to follow the same rhythm throughout. So I'll introduce a pattern.
I'll talk about potential uses in education, and then I'll give you the opportunity to practice a little using that pattern. And a reminder, maybe John can share them again: we have two Google Docs you can use to follow this presentation and workshop. One is a practice doc, where you can take a look at all of the different practice activities we'll do (about six of them), and the second is a document for the questions that you have. And quickly: use any tool you can to do the practice. Hopefully you have GPT-3.5 or GPT-4, but do try one of the other tools too. I'm going to do all my demonstrations with GPT-3.5. Finally, at the very end, I'll give you a little time to work on the goal that you brought into the session: what's your big goal for prompting, and what can you work on now with everything you've learned from Joe, Verada, Rich and myself? So, what is good prompting? Maybe we can start with folks putting in the chat what characteristics you can think of for good prompting, assuming that many of you have already had experience. What words come to mind when you think of good prompting? And I'll go over a couple now from the literature. So I see Brie wrote iterative, explorative, concise, specific. Wonderful, thank you. And specific with examples. Instructional, relevant, clear. So a lot around clarity and specificity, and those are my first two examples of good prompting. Good prompts are specific. If they're not specific, first of all, you're going to have to go back and amend them to make them more specific. You know that term, garbage in, garbage out: generic prompts in, generic responses out. The tool will be using something like Wikipedia for its prediction rather than more specific data. Good prompts also tend to be precise: not only specific, but precise for the question. This next one is an interesting one.
Number three I find fascinating, because I used to teach elementary school students, and it's actually the same rule when you're trying to manage elementary school students, say in gym class: good prompts should be positive. Ask the model what to do; don't try to tell it what not to do. So rather than using the negative, use the positive. And then I think you've touched on a couple of these, and again, it's going to depend on your context. Some people like writing large prompts that try to get the whole answer in one go, and if you're doing something like programming against the API, you're going to need those prompts. But I think for many of us, good prompts are ongoing: something we're doing over a period of time, a conversation. Getting a good output sometimes takes me a couple of hours now, just going back and forth, changing tabs, figuring it out. And I think one of the great things about this space is that good prompts are experimental. Every time you put in a prompt, every time you put in a question, we don't know what's coming out. Computer scientists are saying this is the first time we've been working with code where nobody knows what the answer will be; every response is unique. And I think this is where some of the energy comes from. So let's talk a little about what these models can do, and I should say, what they can do today, because this is changing very quickly. I'm sure many of you have been able to get these or other models to do things I'm not mentioning, but I just want to touch on some of the basics. Number one, they can answer questions and do tasks. For example, I said: generate five potential thesis statements for a 200-word essay in a first year philosophy class, each one concentrating on the topic of the environment. And it was able to do this task for me.
And I'm sure many of you have had these sorts of questions answered. Second is transforming text, and personally I find this one of the most effective uses of these models. On the side here, you'll see I've taken a little bit of text. I use open textbooks when I'm doing this practice because of the copyright issue. So I took this open text, put it into GPT-4 and said, create a glossary of key terms, and it's able to transform the text. But we can go beyond that: we can get it to create questions for us, we can get it to explain the text like we were five, and I'll give you a couple more examples of this in a moment. Textual analysis: GPT can analyze text. I grabbed this text on the left from a Romeo and Juliet corpus and had GPT analyze it: what attitude is being expressed in this section of Romeo and Juliet? It got that the prince was exasperated and frustrated, and also that, as the ruling authority, he's giving a stern warning. So it really gets in there and analyzes the text; it can do semantic analysis and other ways of looking at text. And I'm going to give you a quick example of text transformation now. So I have some text right here, again from an open textbook: what are the mechanics of survey and plot? I'm just going to copy a little bit of it. This is from an open textbook about movie construction and production. I'm going to go to a new chat and just use 3.5 here because it's a little faster. If you're new to this, you need to pay right now for GPT-4, which is a far more accurate tool, but a little slower. So I'm going to paste in this text and say: use this text to create three multiple choice questions that are aligned with Bloom's taxonomy. By using this sort of specificity, I've found that the output tends to be better.
So let's see what we get from this: use this text to create three multiple choice questions. And again, 3.5 is quite quick. If we scroll up, it gives us our three questions, and these should be aligned with Bloom's taxonomy, but we can always ask it to confirm if we're unsure. So, textual analysis and text transformation: how might we use this functionality in the university? I've just given a couple of examples, and I'm sure there are many more we can think of. Learning materials: I gave that example, use the following text from X textbook and develop short answer quiz questions that align with Bloom's taxonomy. So I think very quickly we can start using open text and developing learning materials, and this applies to many other jobs we may be doing at the university. Textual analysis assignments: again, with privacy considerations in mind, having students analyze a genre of writing expressed in a text, or analyze something about a text, and then try to figure out what the AI may have missed. What features of the genre did the AI omit or not capture? Right now, one value of the limitations these tools have is that we can use those limitations to create assignments. Simplifying complex texts and data: I'm sure many of you have done this with the AI already, taking something complex and asking the AI to simplify it. Rewrite this in point form to make it easily understandable by a first year university history student. So I'm getting specific now about who the audience is, and I'm asking it to simplify. If we're faculty, we can think about the expert knowledge we have, and how bringing it down to the level of students, unpacking it, can be a challenge for us. Accessibility: using the WCAG standards (I think I misspelled that, apologies), analyze the extent to which this text meets accessibility standards.
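The transformation pattern above always has the same shape: a specific, positively phrased task, followed by the pasted source text. A hypothetical helper makes that shape explicit; the task strings are examples from the talk, not a fixed vocabulary.

```python
# Hypothetical helper for the text-transformation pattern: pair a
# specific task instruction with the source text being transformed.

TASKS = {
    "glossary": "Use this text to create a glossary of key terms.",
    "quiz": ("Use this text to create three multiple choice questions "
             "that are aligned with Bloom's taxonomy."),
    "simplify": ("Rewrite this in point form to make it easily "
                 "understandable by a first year university history student."),
}

def transformation_prompt(task_name, source_text):
    """Prepend the task instruction, then the pasted text."""
    return f"{TASKS[task_name]}\n\n---\n\n{source_text}"

# Placeholder text standing in for the open-textbook excerpt.
quiz_prompt = transformation_prompt(
    "quiz", "Plot is the ordering of events in a story..."
)
```

The separator and function name are made up for illustration; the useful habit is keeping the task sentence specific (audience, format, framework) and distinct from the pasted material.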
So that's a little bit about text transformation, and what I'd like to do now is have you give it a try. You'll find this on your practice document, so I don't have to keep sharing this slide. What I'd like you to do is open up your model; on the document I've shared with you, you'll find a Wikipedia article. Again, Wikipedia is all openly licensed text, so I'm more comfortable sharing it. Paste the first section of that article into GPT-3.5, or whichever model you're using, and ask it to do a couple of these things: have it create a glossary; have it create a set of pre-reading questions to gauge students' understanding; ask the model to explain the article like you were five; see if you can simplify the article; and, even though Wikipedia is written in a particular tone, ask it to analyze the tone of the text. So I'm going to stop sharing in a moment and give you about three minutes to do this. Just play a little, transform this text, and as a debrief I'll ask what your experience was. So please go ahead. Again, on the practice sheet you'll find this problem; put it into your GPT and take a look at what it generates. I see someone in the chat looking for a link to GPT-3.5; if someone could share that with them, that would be great. And if you're having problems getting onto this tool, just follow along; that might be another way to do it if worst comes to worst. How could you use this in whatever practice you have? So I'm going to time out three minutes here, and after that I'll ask around and see what you noticed. And this is the approach we're going to take for about the next hour: talk about something, then you have a chance to do it for a little bit.
And for those experts in the room, use this as an opportunity to push it even a little farther. Maybe you haven't done semantic analysis; maybe you haven't had it check accessibility. Give it a try and see what it comes up with. Two more minutes on this step. Great, thanks for sharing those instructions again, John. Again, you may want to have that practice document open in another tab so you can easily move back and forth. All right, another minute. All right, so let's debrief; hopefully you had a chance, and you can keep playing with this as we go. Maybe put in the chat whether it was a good experience, what challenges you had, whether it was effective or not. So I have "scarily good". "It provided a bit of a disclaimer." "Very effective," says Catherine, wonderful. So thanks for sharing that, and let's keep going and look at some other types of output before we get into prompt patterns. Another output you may have experimented with, and maybe I could get a thumbs up now using the reaction tool: have you had GPT, or a model you're using, generate code or tabular data for you? Okay, so I have one thumbs down, thank you, one X, and about ten thumbs up right now. So these tools are able to generate code. In the top right-hand corner, I asked it to generate a Reddit scraper for the r/news subreddit using Python, and it generated this code for me. It's also able to output a CSV: on the left-hand side, I asked it to create a CSV around world population. It didn't have access to the data, but it was able to do this fairly roughly. And at the bottom left, I had it generate a rubric; I was more specific than that, but it was able to generate one. And I'm seeing in the chat that some of you have had different experiences, or it generated incorrect code.
I know Rich has said he spent a lot of time correcting the code it generated. So I will say GPT-4 is better, but I like this idea that when you're using these tools right now, you always need a human in the middle, and ideally that human needs some expertise. For novices, I think this is really challenging, because of the hallucinations that Joe brought up earlier: GPT is 100% confident and maybe 70% accurate. So I'd like to start thinking about some of the ways we can improve our response quality. We have three sections now: we're going to look at response quality, which is how we can get more accurate, more consistent and better output; then we'll take a look at refining the output we get; and the third category is interacting with the tool. All of the approaches I'll talk about appear right now in preliminary research; you can look up prompt engineering or prompt patterns, and I've linked to some of these on the blog. So the first pattern I want to mention is the persona pattern. This is one of the first patterns I picked up, and I imagine it's a pattern many of you have worked with. Using the persona pattern involves asking the AI to respond to a prompt using a specific character, a specific persona. So you could say things like "act as persona X" or "you are persona X; perform this task". An example: you are an experienced physics faculty member who is knowledgeable about teaching and learning and physics pedagogy; create a learning activity about force for first year university physics. By priming the model and saying you're an experienced physics faculty member, you're helping it use data that is going to be more specific and richer for your answer. And I'm going to demonstrate the use of the persona pattern now; I've saved some of my patterns in this table.
So I'm just able to copy them in. This is what I'm going to demonstrate now, and I'm going to use GPT-4 for this; it's a little more powerful and might get us a better response. Here's my prompt: you are an experienced physics faculty member who is knowledgeable about teaching and learning and physics pedagogy using peer instruction. And I used Eric Mazur, who writes about peer instruction and developed this approach, as a way to get specific with the model and generate the output that I want. Create a learning activity about force for first year university physics. And I'm going to run that now. So: understanding the concept of force with peer instruction, and you'll notice it already pulled up polling software; it's already thinking about clickers, which are often part of Mazur's approach. Now we have a concept test (if you're familiar with peer instruction, they use a concept test), peer discussion, students vote again, which is another part of peer instruction, then a whole-class discussion, an assessment and a rationale. Now, I don't know physics, I'm sorry, and this is where the expertise comes in, but from my understanding of Mazur, it did a pretty good job of unpacking the basics of peer instruction for this activity. So we can go really wide with the idea of a persona pattern: we can think about the model acting as a historical figure and then ask that figure questions; we can have the model act as an expert in a particular field and give it a task; we can even have the model act as an inanimate object or a terminal, for example, act as a math terminal, or act as an Excel spreadsheet. And thinking about teaching and learning, we can think about interacting with the model as an example patient, or as a member of the public who is concerned about a particular issue.
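If you ever drive a chat model through an API rather than the web interface, the persona typically goes in the system message. A small sketch, assuming the common role/content message format used by chat APIs; the wording of the persona and task here mirrors the demo but is otherwise illustrative.

```python
# Persona pattern via an API-style message list: prime the model with
# a role in the system message, then give the task as the user turn.

def persona_messages(persona, task):
    """Build a two-message conversation: persona first, then the task."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

msgs = persona_messages(
    "an experienced physics faculty member who is knowledgeable about "
    "teaching and learning and peer instruction",
    "Create a learning activity about force for first year university physics.",
)
```

In the web UI the same thing is achieved by simply starting the prompt with "You are..." or "Act as...", as in the demonstration above.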
I'm going to go over one or two more slides and then give you a chance to play with the persona pattern. The audience persona pattern is very similar to the persona pattern, except now we think about the audience on the receiving end: who is this output for? The example here is, I'm a first year university student struggling with study habits; explain to me the different ways that I can improve. So why might this be valuable in the context of a university? Learner support: starting to think about what generative AI as a tutor might mean. Act as an experienced university tutor and ask me questions about the concept of force in physics. Understanding the learner experience: I tried this one out and it was quite interesting. You are a first year university student in BC (I was specific about the province) starting an introduction to biology course; describe the areas that you're struggling with. Learning material development: again, this idea that we can develop lesson plans, rubrics, courses. You are an experienced physics faculty member who is knowledgeable about teaching and learning and physics pedagogy; create a learning activity about force for first year university physics. So, over to you. What I would like you to do now is, one moment, oh, I'm missing a slide, here we go. What I would like you to do now is practice the persona pattern, and maybe try one of these: try a historical figure and ask it a question; try an expert in a particular field and give it a task; try an inanimate object or a terminal and give it a task. And remember, the construction you're going to use might be something like "you are a" or "act as a". You can find this on the prompt practice sheet that I gave you. Again, I'm going to stop sharing and give you about three minutes to try out your persona prompts now, and John is sharing the persona practice sheet again. So, about three minutes on this one.
Hey, just another minute. Great. And now what I'd like folks to do is to share in the chat what persona they asked the tool, or the model, to take on. And at the same time, I'd like to hear from a couple of folks, if you want to put up your hands, unmute yourself and share what your experience was with the persona pattern. Wonderful. And so I have "I am Marco Polo" and Barack Obama, and Louisa mentions she hadn't considered using this idea of acting as a historical figure. So, other personas that you did, wonderful. And you can take a look in the chat if you want to see some of the different personas. A sustainability expert; a TV remote: what is your favorite hiding spot? Thanks, Joe. Wonderful. So I'm going to take this opportunity now, as you're sharing that, to open it up to the room and see what questions you have or what observations you want to share with us, if you're comfortable unmuting yourself. Yeah, Ricardo, please go ahead. Thank you. Just one comment that I found interesting. I'm trying to solve a problem here, well, not quite a problem, it's more like an initiative. The interesting part about the advice that you were giving was that during this exercise I was using two different roles, right? One would be a service portfolio manager, and the other one would be more like a business person mentoring someone who runs startups, right? So the feedback came out completely different, and well oriented to each of the roles it was taking. It was just remarkable. So you found the feedback was totally different based on the role you gave it? Exactly, exactly. And actually perfect, right? So I'm not saying that there's something wrong. Of course, I couldn't go through all the output that was given, but at a glance it looks very interesting.
So I just want to comment on the importance of using a role, which I believe is something I have learned a lot about today, yeah. That's fascinating. And I know for myself, one thing, and we'll talk about evaluation, is when I'm creating something with it, I will say: evaluate this as if you were an associate dean. Now evaluate this as if you were Ricardo from IT. And have different sets of eyes looking at something. So very helpful. Thank you, Ricardo. Pooja, please go ahead. Jumping in, Lucas, because you guys have really mitigated some of the fear I had, and I really wanted to acknowledge that, with these small exercises, especially the latest one. So, just to highlight, I'm not a faculty member, I'm a staff person in HR. So I just asked what Barack Obama would do in terms of leadership skills for retention and greater performance of teams, and of course it could have been generic, so thanks for making it specific, because otherwise I would have gotten volumes; I asked for five points. And then what was interesting is when I asked it to create a case study. Yeah. To demonstrate these leadership skills. That's what blew my mind, because creating case studies is such a challenge, and to get that was, yeah. So I just wanted to quickly share that and acknowledge all your support, whether it was Berda, Joe, Rich, or yourself, in slowly walking us through the kit and then making us jump into the high waters. Thank you. Wonderful, thank you. And it's so neat to be able to open up this conversation so all of the incredibly wise and intelligent people at the university, like yourselves, can play with it and see where it can go. Great. So thanks for sharing all of that. And we're gonna keep looking at some of these patterns now, and I'll stop for questions and comments like we just did as we go through.
So we've looked at the persona pattern, and I mentioned audience persona. The next pattern, we're not gonna practice this one, is kind of a foundational pattern. And what I mean by a foundational pattern is it's something that you might wanna use over and over; it just generally generates good responses. So this is called query, example and context. And this has been used a couple of times already in the context of education to refine critical reflections. But the way that this works is you start with a query. So what is the task you want the model to perform? Again, this could be your question or it could be an instruction: write an essay, create a recipe, whatever you want. Second is an example. And Rich followed this a little bit in his example. What is an example of how you want the task to be performed? So share a clear and concise example showing exactly the type of answer you want from the model. I've used this a lot for learning objectives. So I can start by saying, I like to use a different framework, so say: using Dee Fink's learning taxonomy, write a learning objective for, let's say, biology. And then put a specific example of that learning objective as the example, and then write out the context. So: this is a first-year biology course, the objective needs to be measurable, and include your audience, your constraints, et cetera. And I find this is often at the foundation of different prompts that I'm doing. So here's a simple example. Can you summarize the key points of the Canadian Charter of Rights and Freedoms? For instance, the key points of the US Bill of Rights include freedom of speech, freedom of religion, the right to keep and bear arms, freedom of assembly and the right to petition. And then the context, and I got a little bit of persona in here.
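The three parts of QEC can likewise be assembled mechanically. A minimal sketch; the labels and layout here are one plausible choice, not a fixed syntax:

```python
def qec_prompt(query, example, context):
    """Query-Example-Context: the task, a model answer to imitate,
    then the constraints and situation."""
    return (
        f"{query}\n\n"
        f"For instance: {example}\n\n"
        f"Context: {context}"
    )

# The Charter example from the slide:
print(qec_prompt(
    "Can you summarize the key points of the Canadian Charter of Rights "
    "and Freedoms?",
    "the key points of the US Bill of Rights include freedom of speech, "
    "freedom of religion, and the right to petition.",
    "I'm a first-year political science student in Canada studying "
    "constitutional law and need the fundamental points for my upcoming exam.",
))
```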
I'm a first-year political science student in Canada, studying constitutional law, and need to understand the fundamental points of the Canadian Charter of Rights and Freedoms for my upcoming exam. So let's give you a little bit of practice on the QEC. And again, we'll pause for about three minutes. I would like you to use the QEC pattern or the audience persona pattern, you pick, or a combination of them, to complete one of the following. Design a rubric to assess a first-year expository writing essay. Develop a learning activity with measurable learning outcomes for your discipline. Or create a CSV to tabulate and categorize an aspect of your work. Feel free as well to try a different type of prompt, but I wanted to give you some that you may wanna work on. So again, using either QEC, the persona pattern, the audience persona pattern or some combination, prompt the model to do one of these three or a wild card of your choice. And I'm gonna stop sharing and give you another three minutes to do this. This will be our last exercise on prompting output; after this, we're gonna start looking at prompt refinement. And I'm gonna encourage you at the end of this to share your prompts in the chat with us. Just copy and paste them into the chat; I thought that was really interesting to see. And at the very end, we're going to ask you to perhaps engage with the blog for this, which has a share-your-prompts section, because we'd love to see responses to each other's prompts. Maybe you see a prompt that someone else has used and you think, oh, that's an interesting technique, or maybe that's relevant for me. We'd love to see folks engage with that as well, to keep this as a living, constantly changing learning piece, because LLMs change fast. So it'd be really interesting to see, for example, if someone put a prompt in today, what the answer they receive might be like in two months' time.
And I was just seeing, Kayla says: I was able to ask the AI to create a rubric, but if I were to ask it to cite its sources, how reliable would this be? It's a very good question, and it definitely lacks reliability there. However, it depends. With ChatGPT-4, if you're working in your topic area, I do ask it to cite its sources, and it has been pretty accurate for me some of the time. That's the challenge, though: some of the time, and some of the time it's completely inaccurate. But what I like about your question as well, Kayla, is that it gets us thinking about the next section we're gonna look at: how do we think about these prompts? How do we refine them? How do we improve them once we've created them? And I expect, as these tools develop further, we're gonna see improvements in this sort of ability. A couple of extra additions there as well, Lucas. Perplexity AI, which is a different tool, is really quite good at citing actual sources. And don't forget, yes, you can use one LLM to do all of this, but I've actually experimented with using different LLMs to do different things. One way that I've done this: I asked ChatGPT a similar prompt, asking it to create a rubric, and as for sources, the sources weren't great. But then I put the answer that I got from ChatGPT into Perplexity AI and asked it to find what the sources might be, and Perplexity AI did find sources that were actually relevant to the answer. So that might be one way you could do it. Thanks, Rich, that's great. I love that idea of using different tools to evaluate and improve a prompt. So let's get folks now, if you don't mind, sharing your prompt, your QEC or your persona prompt, in the chat with us; just copy and paste them in there and let's see what everyone's thinking. Great, so as a professor in chem, I love the specificity on this, Antonio.
And Antonio's using the BOPPPS model, which you'll know if you've taken the ISW; ChatGPT is effective with the BOPPPS model, I was quite surprised by this. Steve mentions he had a terrific outcome from this prompt to GPT-4: I'm a third-year biology student at university taking a course in healthcare policy. Ask me a series of five questions. So getting into that specificity that Rich brought out. That helped me learn more about the role and function of the Canada Health Act. Ask one question at a time. And this is something we'll talk about in a few minutes; asking one question at a time can be very helpful. Explain how to get rid of lactic acid in the body as if I were 10 years old, wonderful. So thanks for sharing all of those. Again, we'll share the blog with you after and we'll see what sort of prompts you wanna share. So we've talked about a couple of ways to improve our responses. And what I'd like to go through with you now, and give you some practice with, is refining both the prompts and the results. So again, we're gonna do the same sort of activity, but this time we're gonna think about refinement. And I think a couple of you mentioned this process is often ongoing; we're doing this throughout. We're often in this process of refinement: prompting, refining, improving, prompting again, almost like a conversation of sorts, although I don't want to endow too much humanness on the tool. So the first pattern I wanted to share with you is a prompt refinement pattern called question refinement. For this prompt, what you can do is put it at the beginning of a tab. In GPT, you probably know this, there are tabs on the side if you save the history, and each of these tabs has a little bit of memory in it. What that means is I can go back in there and ask about the same thing; I have a tab for recipes, for example, which will remember what I prompted before. So for this prompt, you prompt it like so.
From now on, whenever I ask a question, suggest a better version of the question to use instead. And the example below is: from now on, whenever I ask a question, suggest a better version of the question to use instead. Why is the sky blue? And then the output that the model shares is: what atmospheric conditions and scientific principles contribute to the perception of a blue sky? So once again, I'm gonna turn this one over to you. I'd love you to give this a try. We don't have to spend long on this, maybe about a minute. Put in that prompt pattern, then ask it a question and see the extent to which your question improves. So I'll just give folks one minute to do this one. And you'll find this on your practice sheet, right, and John's put it in there now. So: from now on, whenever I ask a question, suggest a better version of the question to use instead. And you can get creative with that. Suggest a more creative version of the question to use instead. Suggest a more evidence-based or scholarly question to use instead. Kind of playing with that pattern. Wonderful, I love that. Kinchi shares: I always ask, is there a more Pythonic way, while asking for code snippets. And Carolina Trapini, hey Carolina, nice to see you, asks: what is the correct way, or should we acknowledge GPT if we use it in our teaching? This is such a good question. And we can open this to the room a little bit while we're doing this activity: to what extent, or should we, acknowledge this if we're using it in our teaching? Does anyone wanna jump in on that question? Or in our work, for that matter? I think part of the reason no one's jumping in right away is that it's hard. A lot of us are unsure. I'm unsure of what to do with it in my work, for example. I would guess in a classroom, being really clear about it is important, and Major says: ethically, I think we should. There's been a couple of examples.
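If you're scripting this, the refinement instruction is just the first message of the conversation, so it applies to everything that follows in that tab. A sketch of the message list you might send to a chat API; the role/content dict shape follows the common chat-message convention, so adapt it to your tool:

```python
REFINE = ("From now on, whenever I ask a question, suggest a better "
          "version of the question to use instead.")

def refinement_conversation(question):
    """Place the standing refinement instruction before the real question."""
    return [
        {"role": "user", "content": REFINE},
        {"role": "user", "content": question},
    ]

msgs = refinement_conversation("Why is the sky blue?")
```

The same slot works for the creative or scholarly variants: just swap the wording of `REFINE`.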
Carissa says it seems only fair to do if we ask students to do that. And Annette says: I wonder what ChatGPT says about this question. There have been a couple of examples now where people in staff positions and faculty positions have had issues with this. So one of them was, I think it was Vanderbilt University: three associate deans were suspended because they used ChatGPT to write a letter about school shootings. They credited ChatGPT on the letter, but the university said that using it to reassure students in this area was inappropriate, and there was a huge public backlash. The other example is the Toronto school board, which was accused of plagiarism by parents for using GPT. They didn't credit it; they used it in a report they sent to parents, and there were media reports about plagiarism by the school board itself. So I think it's a difficult question. Carolina, I've noticed you've turned on your camera. Did you want to comment? Yeah, I guess I struggle a bit with how to use it, whether to cite it, because, for example, I tell the students in my lab: use it as a tool, then do your own writing. But there are times when it's a great tool to improve language construction. And so when it works like an editor, I feel more comfortable with it, because we probably wouldn't cite, say, a colleague we gave an abstract for a talk to for refining. But it's a very fine line, and I struggle with knowing how far to take it. Like, I know that I wouldn't feel comfortable having it create new content for specific things. I feel more comfortable if it's acting like an editor somehow, but it's such a fuzzy line, right? And I wish there was a clearer way, so that everyone could feel more comfortable about what is considered fair use versus not. Absolutely. And I imagine this will start coming up more as we do it, but it's challenging because of the IP as well.
Like, if we start citing it, who are we citing? Because it's scraped all of the knowledge on the internet, and so it brings up copyright. We do have some precedents, like NVivo for research: in a research study, we're pretty clear about what research tool we used for qualitative analysis, so we will mention that. But then, on the other hand, Nicole put in the chat: do we cite calculators? So where does this lie? It's a really good question. How can anyone tell if something you generated was from AI? And there's another example that just came out in law. Could I get a thumbs up if you saw the law example where the lawyer got in trouble for using ChatGPT? Yeah, Carolina gives a thumbs up. So it was citing fake case law, and because that fake case law was used in a trial, the lawyer was punished by the law society. So it's really sticky stuff, and these are great questions. So hopefully you had a chance to use question refinement. I won't get you to share the questions you refined, but I do wanna go over a couple more refinement approaches. I think these are really powerful and part of this idea of ongoing prompting. So the next example I wanted to share has been used a lot to generate better output, and it's also a good way to think about the idea of the model showing its work. This is where it can be helpful for finding what perspectives it's pulling from, what ideas it's pulling from, what authors it's pulling from. This type of prompting is called chain of thought prompting. To do it, you ask the gen AI to show its work step by step or explain its reasoning. This approach improves the quality of the output; there's been research showing you get better output by doing this, and it can help you understand the reasoning. So an example here is one that I grabbed from an article that I've shared on the web.
A juggler can juggle 16 balls in total. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there? Let's think step by step. So I think this was using GPT-3, and the model had difficulty answering this question, but when "let's think step by step" was put in, it was able to answer it. So by using chain of thought, we can start finding out what reasoning it's pulling from, but also improve the quality of the output. And I'm gonna go through one more refinement and then give you the chance to refine. Reflect on reflection is another way to refine your prompts, and what we're doing here is asking the model to reflect on its own response. So ask the gen AI to evaluate the response that it generated and then to rewrite the response based on that evaluation. You might ask it to evaluate its response to a question, listing strengths and weaknesses, and then, based on this evaluation, revise the answer. By going through this revision process, you can improve the output. So: evaluate the recipe you have just shared based on ease, deliciousness and clarity. Now rewrite the recipe. And I also mix this sometimes with the persona pattern. I will say: act as an associate dean and evaluate this strategic plan. What criteria did you use? Now rewrite the strategic plan based on these criteria. So you can start combining refinement patterns. Let's take a look at the actual model and see this prompt in action a little bit, and then I'll give you a chance to do this once more. Oh, I don't see my prompt table, just give me a sec here. Right, so this is an example that I like doing. I'll just pop this into a new chat. So if I say: write a paragraph for a psychology 101 assignment that I need to submit. So it's gonna give me a paragraph; I'm just using that as an example. Oh, interesting, it paused on me.
Give me a moment. All right, there we are. So I had the GPT create a paragraph about psychology, fairly basic, and now what I can do is ask it: act as an experienced psychology professor at a university and evaluate the following paper for clarity and accuracy. So this is reflect on reflection. What specific criteria did you use to evaluate it? Based on these criteria, rewrite the paragraph. So now what it's gonna do is evaluate the paragraph, say what criteria it used to evaluate it, and then rewrite the paragraph for me. So when we're creating outputs, we can go through this process again and again to keep improving the output of what we are creating. So again, this gets into refining what we're doing. So how might we use this approach? A couple of ways come to mind right away in education, and this first approach is something I use on my own quite a bit. Rewrite the following passage for grammar and clarity. After doing this, create a table showing what the original was, what the revised version was, and why you made each change. So rather than just rewriting one of your emails, it now acts as writing support by using chain of thought: what was the original, what were the changes, and why were these changes made? And then you can take that a little bit further and say: create a worksheet so that I can practice correcting these grammatical errors, for example. Fact-checking assignments, again using COT: this could be an assignment to use in class. Have students ask the model to show the steps for how it arrived at a specific answer, and then have them analyze the accuracy of the output, look at the steps that were used, and try to find issues with those steps or how they might have approached them differently. And then output quality, which I just demonstrated: act as an experienced psychology professor at a university, evaluate the following paper for clarity and accuracy.
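Both refinement moves just shown are easy to wrap in code if you're working against an API. A sketch with a stand-in `model` callable in place of a real chat call, so only the prompt construction is being demonstrated, not any particular API:

```python
def with_chain_of_thought(prompt):
    """Append the cue that elicits step-by-step reasoning."""
    return prompt.rstrip() + " Let's think step by step."

def reflect_and_rewrite(model, draft, criteria):
    """One round of reflect on reflection: evaluate, then rewrite.

    model: any callable taking a prompt string and returning text,
    standing in for a real chat-model call.
    """
    evaluation = model(
        f"Evaluate this response based on {criteria}, listing its "
        f"strengths and weaknesses:\n\n{draft}"
    )
    revised = model(
        f"Based on this evaluation:\n\n{evaluation}\n\n"
        f"rewrite the response:\n\n{draft}"
    )
    return evaluation, revised
```

Calling `reflect_and_rewrite` on its own output several times is the "again and again" improvement loop described above.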
So that's reflect on reflection. What criteria did you use? Now you're getting it to show its work: based on these criteria, rewrite the paragraph. So over to you now. What I would like you to do is prompt the gen AI to create a learning activity or an email, or solve a problem, or use one of the prompts you did before. And when you do this, ask the model to show its work and see what the results are. Once it's done that, have it evaluate its work. So use reflect on reflection: say, evaluate this based on X criteria, and then get it to rewrite based on the evaluation. So I'm gonna give you three minutes on this again. Just to go over that: what you're doing is getting it to write something, a learning activity, a paragraph, an email; asking it to show its work when it's done that (for a piece of writing, that might be asking it to show the approach it took, or for a philosophy statement, what theory was underpinning it, and so on); secondly, getting it to evaluate what it wrote and suggest improvements; and then getting it to rewrite based on that evaluation. So we're building up here, as you can see; it's getting a little bit trickier. You can follow each of these patterns on the blog site or on the document itself. And as you're doing that, think about how this sort of refinement might be used in your own teaching and professional practice. Wonderful, and Bill suggests: I suppose you could ask ChatGPT to do a task but intentionally make an error, and then provide the product to students and ask them to find the error. Absolutely, as a learning activity. Or even have it do something from a particular perspective, and then have students identify the perspective and the errors that happened due to that perspective, or what was missing. Thanks, Bill. I'll give us two more minutes on this, and what I'd like us to share this time is how much of a difference the evaluation made.
So did that step help refine the output of the prompt? Or to what extent did it help refine the output? A reminder to folks as well, for those who've asked questions: thank you for all your questions. We've tried to keep up, and we're also answering them in the questions document, which I'm gonna pop into the chat. So if you've asked a question and we haven't answered it in the chat, we've likely answered it, or soon will, in the document. Right, so maybe in the chat you could put the extent to which the evaluation improved your output or not; you can just say yes, no, it improved it a lot, or it didn't improve it much, it's up to you. Go ahead. And at the same time, I'm gonna open it up to the group to see if you have questions or comments or observations. Raise your virtual hand and feel free to jump in on the mic. Grace says it changed it a lot: I think I need to refine it more, because I don't actually see it explaining the thought processes I expected. And that can be a little tricky; getting it to explain its thinking is one of the things that I've found more difficult. Michael says: I asked ChatGPT to critique the critique it provided, and it provided a modified answer based on the critique of its critique. That's awesome, and it's getting quite meta now; it's critiques all the way down. Go ahead, Rich. No, I was just gonna say essentially that. There's actually a tool called Auto-GPT, and there's a Python library which allows you to essentially spin up two versions, or two instances, of ChatGPT and ask them to basically bounce between the two until it gets to the right answer. So you can kind of automate this whole thing; what you've just done by hand, you can actually get the tool itself to do, and it comes out with interesting things. Great, thanks. Again, any comments or questions at this point that you wanna share with the whole group?
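The two-instance bounce Rich mentions can be sketched as a short loop. The `writer` and `critic` callables here stand in for two separate chat sessions; a real version, like the Auto-GPT-style tools he describes, would call an actual API in their place:

```python
def bounce(writer, critic, task, rounds=3):
    """Bounce a draft between two model instances until the rounds run out.

    One instance drafts and revises; the other only critiques.
    """
    draft = writer(task)
    for _ in range(rounds):
        critique = critic(f"Critique this and suggest improvements:\n\n{draft}")
        draft = writer(
            f"Revise the draft below based on this critique.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

A fancier stopping rule could ask the critic whether the draft is good enough instead of using a fixed number of rounds.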
Otherwise, we're gonna move on to our last section, which is about interaction, and then you'll have a chance to work on your own prompts. And Louisa says: I asked, what were the main causes of the collapse of the Roman Empire? And I asked it to tell me what it based its answers on, and at first it decided not to share. Excellent, so kind of figuring out, again, where it's pulling from. Tara, you had a question there? Can you say more about how ChatGPT is talking to itself? I think that might be directed to you, Rich. Yeah, so essentially there is a tool called Auto-GPT which uses ChatGPT's API, and then there's a separate Python tool which replicates the same thing. So essentially you spin up two instances of ChatGPT, as if you had two different tabs open in your browser, and you ask one a question, get its answer, paste that answer into the other tab and ask it to reflect on it or critique it, get that critique, put that back into the original tab, and then ask it to refine what it said. And you can keep going with that process until either you think it's better, or you can actually automate that process until it thinks it's better. Wonderful. And Tara says: I see, so it's like creating an agent. Yeah, perfect. And Nicole mentions: I just asked, why did you decide on that output? And the answer was exceptional. Wonderful. So the last section that I wanted to share is about interacting with the gen AI, and I think this is a really interesting one for education, especially thinking about how it interacts with students. There are two patterns I'm gonna talk about. One is called the flipped interaction pattern and the other is called the game pattern, and I'll give you a little bit longer to play with these.
So I'll demonstrate these with GPT. For the flipped interaction pattern, we flip things a little bit: instead of asking the model for an output, we ask it to ask us questions and, based on our answers, generate a response. And you'll see at the very end I included this prompt: start with the first question. The reason I do this is so that it doesn't ask all the questions at once and I can interact with it. So for example: act as a learning designer and ask me questions about my learning in my adult education research methods 400-level course, in order to understand my goals and the course goals. Once you have enough information, write a draft course outline. Start by asking me the first question. So let me demonstrate this one now. I'm just gonna create a new chat and do a whole new prompt. So: I am learning to paddleboard. Act as a coach and ask me questions to help me develop a training plan. Ask the questions one at a time, and when you have enough information, make the training plan. And then I'm gonna say: start with the first question. And we'll see; sometimes it still jumps ahead and asks me all the questions at once, and I need to tell it to ask one at a time. Great, so let's start, yeah, see, it did that. So now what I'm gonna say is: one question at a time. Have you ever paddleboarded before? And I'm just gonna say: yes, 50 times. Do you have any experience? My primary goal is fitness. How often do you practice? I practice twice per week. I have access to a paddleboard. I plan to paddle on rivers. I am comfortable swimming. I'm interested in participating in races. I have no health conditions. And based on that, it's going to develop a training plan for me.
So again, I've been finding it really interesting to flip the script a little bit and have it ask me questions rather than the other way around. And we can start thinking about what this might mean for tutoring when we're working with students. The second pattern I wanted to show you before you practice these is the game pattern. GPT is pretty good at playing games with you, and again, this is useful to think about in terms of tutoring, in terms of giving students practice, or just playing a game. For this pattern, you ask the model to play a game with you to develop or refine your understanding of a concept, skill or subject. And you can ask it things like: I want you to play a game with me to practice X. Make sure the game is challenging for X and involves Y. Let me demonstrate this one, again using the model itself. So I'm just going to go down to my table and grab this one here. And the game that I decided to do was around logical fallacies. I'm going to use GPT-4 for this because it's a little bit more accurate for me. I want you to play a game with me to teach me how to identify logical fallacies. Make sure the game is logical and challenging. Let's see if it works. So this is the logical fallacy identification game: I will provide you a series of statements and you have to say whether each is a fallacy or not; if the statement doesn't seem to have a logical fallacy, say "no fallacy", and I'll give you the explanation for each one. So it gave me a few. I can say: statement five, no fallacy. And I'm not really sure here, I'm just going for it. And it's going to answer that for me. So you can set up these games and practice different skills, et cetera. So what might this mean in a university setting? I think the flipped interaction pattern is really interesting when we think about tutoring. So: act as an expert in student learning and ask me questions to understand why I did poorly on a quiz about soil horizons and profiles.
Ask me about the specific topic and my approach to studying. When you have enough information, write a study strategy for me. So now it's acting like a tutor or a coach. And games, for directed practice and reinforcement: play a game with me to help me learn logical fallacies in argument; the game should be challenging for a third-year university student and involve turn taking and feedback. So this is the last short exercise that we're going to do, and what I'm gonna get you to do is create and play a game with the model to practice your understanding of a topic or content. You might have to push it a little bit to take turns with you, so use terms like "I wanna play a game with you" or "play a game with me". And then use the flipped interaction pattern: ask the gen AI to act as a tutor or a coach and develop a plan to help with something you are learning, or something in your life. Make sure when you do these to add a little bit of a tail to the end of the flipped interaction pattern and say: start with the first question. That's gonna help push it to ask questions one by one. If it does generate all the questions at once, you can say "I said one by one" and it will apologize and do it one by one. So I'm gonna give you five minutes on this now; it's a little bit more challenging, because you're gonna do two things: create a game and create a flipped interaction. Go ahead, and you'll find all of these on the practice worksheet. It's a good question, Carolina: can you set up a game where students can't go off topic and start talking about random things? You might be able to create a tab that did that. The only thing is, students can always go to the model itself and change that. I think that's where developers are coming in: they're developing tools that are built for specific topics around this.
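While folks work, the two patterns from this exercise written out as templates. Again just a sketch; the phrasing mirrors the practice sheet and the placeholders are yours to fill:

```python
def flipped_interaction(role, goal, deliverable):
    """Flipped interaction: the model interviews you, one question at a time."""
    return (
        f"Act as {role} and ask me questions to {goal}. "
        f"Ask one question at a time. When you have enough information, "
        f"{deliverable}. Start by asking me the first question."
    )

def game_prompt(topic, audience):
    """Game pattern: turn taking and feedback, pitched at a given audience."""
    return (
        f"I want you to play a game with me to practice {topic}. "
        f"Make the game challenging for {audience} and take turns, "
        f"giving me feedback after each answer."
    )

print(flipped_interaction(
    "a coach",
    "help me develop a paddleboarding training plan",
    "make the training plan",
))
```

Note the "Start by asking me the first question" tail is baked into the template, since that's the bit that stops the model from asking everything at once.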
I know that Contact North, the group that governs distance and blended education in Ontario, just came up with a tutoring tool that I think they built using the ChatGPT API. It's confined just to tutoring, but it's using the same API. And I think this is one of the ways things are going to start changing: we're going to start seeing specific tools built around these large language models.

To add to that: as a developer, you can actually upload your own data to ChatGPT (privacy: insert privacy statement here), and as long as it's a large enough amount of data, you can end up training your own model. When we use ChatGPT in a browser, we're using a specific version of GPT-3.5, the July 20 version; I can't remember exactly, but it's the most up-to-date one. As a developer, you can choose exactly which model you use: the first GPT-3.5, or the GPT-3.5 from March or June; I think there are a few. But you can also train your own. If you think of GPT-3.5 as the base layer, you can add your own data on top and, over time and with a bit of money, train your own version, and then as a developer you can use that model. So to answer your specific question: you definitely could do that, but I don't think you can do it just using the browser version of ChatGPT; you would need some development. Right, Benji?

Yeah, thank you very much. Actually, just to follow up on this: one way you can do it is probably to use RAG, retrieval-augmented generation, where you provide a context. And I was just thinking, because in my group I'm trying to do a little project like this. I think overall, if we can provide a foundation for all the different departments or disciplines, then you don't directly work with, let's say, a GPT, right?
So you have kind of an intermediate level, and then you start to put in your own knowledge base. Then we don't really have to code from scratch. I mean, you can, but it's repeated work, right? So I was just thinking, at the UBC level, if we can leverage some funding to find a group to build this kind of intermediate model that can directly interact with GPT-4, then at a higher level I can just go in and say, okay, I'm going to use that model, train it on my own context information, and use it to create an app for my own course. Just a thought; I know it requires a lot of time and money.

Loosely, that's actually what several other universities have done. UMich, for example, have got their own, and they call it U-M GPT or something; at a very high level, that's exactly what they've done. But I think they based it on Llama rather than ChatGPT, because we can't really host ChatGPT ourselves; they've created their own using Llama 2, I think. So you can do it, yeah.

And I'm just looking in the chat. Antonio says: I was pleasantly surprised. I wanted to learn how to play chess, so I asked it to act as a tutor, using tables to simulate a chess board to demonstrate moves, with relevant emojis. And Michael says: I made a game on identifying different types of roads, but it started playing the game by itself. I have a lot of trouble with it playing the game by itself too. I've been finding I have to simplify more, "play a game with me" or "I want you to play a game with me," so it doesn't create the whole game before I can deal with it. Exactly the same thing happened with the Socratic method exercise: it would ask the questions and then answer them immediately. That's why I had to be specific: "your task is to only ask the question." Same deal. Right.

And Tara mentions the custom instructions in GPT-4. If you are using GPT-4 now, there are custom instructions that you can put into it.
For example, you can say, "all my answers should be in the Canadian context" or "I don't want you to share the whole game each time," and it will apply that across all of your chats.

When I said "let's play a game," Bree says, it left her in the dust. So that's a dangerous one. And: I made a unit conversion game; one of my answers was wrong and it did not catch it. Great share. Maybe a couple of other folks can share their game or their flipped interaction.

One game to play that I recommend is the prompting game. Say that you want to learn how to prompt better and ask it to come up with an output, and then you try to prompt it into producing that output. I'll share the specific language for that on the blog. What it will do is share an output; you need to prompt it to get that output, and it will evaluate your prompts, if you want to improve on your prompting in that way. So yeah, sorry, that was a lot.

Thanks. One additional way to think about why it didn't catch your unit conversion being wrong: remember the generative part of generative AI. It will try to generate something, so give it something that it's going to be able to generate, not regurgitate. Google is really good at regurgitating facts and doing very basic A-to-B translation, right? Generative AI is really good at coming up with something. So when people ask factual questions, that isn't necessarily what generative AI is best at, whereas Google is, because it's good at regurgitating stuff.

And folks are sharing their games and finding challenges with them, for sure, where the model just goes and does the whole game; this just takes some playing. I was just trying yours, Andrea, about organic composting, and I also couldn't get it to play a turn-by-turn game with me. I will need to keep trying to play with those prompts.
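Since several people hit the same turn-taking wall, one way to stop re-fighting the wording each time is to collect the cues from this session into small reusable templates. A sketch, assuming you paste the result into whichever chat tool you're using; the phrasing just mirrors the suggestions above and is not an official pattern.

```python
# Small helpers that assemble a game prompt and a flipped interaction prompt
# with the "tail" cues discussed in this session (turn taking, "your turn
# first", "start with the first question"). The wording is illustrative.

def game_prompt(topic, level="a third-year university student"):
    """Build a game prompt that nudges the model to take turns."""
    return (
        f"I want you to play a game with me to help me learn {topic}. "
        f"Make sure the game is challenging for {level} and involves "
        "turn taking and feedback. Ask me one question at a time and wait "
        "for my answer before continuing. Your turn first."
    )

def flipped_interaction_prompt(role, goal):
    """Build a flipped interaction prompt, ending with the one-by-one tail."""
    return (
        f"Act as {role} and ask me questions to {goal}. "
        "When you have enough information, write a plan for me. "
        "Ask your questions one by one. Start with the first question."
    )

print(game_prompt("logical fallacies"))
print(flipped_interaction_prompt(
    "an expert in student learning",
    "understand why I did poorly on a quiz about soil horizons"))
```

If the model still plays the whole game by itself, the fallback above still applies: reply "I said one by one" and it will usually correct course.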
So it does take playing, and it does take work thinking about those prompts. And again, as I mentioned, use that tail pattern at the end: say "turn taking," or say "your turn first" as another way to cue the model that you're playing with it and that it shouldn't give you everything at once.

So that brings me to the end of the patterns that I wanted to work through with you. What we've done today, broadly: Verata, Rich, and Joe touched on the high-level points. Verata covered what a transformer is and what large language models are, Rich really delved into what a prompt is, and Joe went through the limitations. And then I looked at the individual prompt patterns, and we had a chance to practice them.

So we have about 12 minutes left. You're welcome to leave now, or you can stay, turn your camera on or off, work on that one prompt that you might have brought with you, and ask us questions in the chat as you go. Overall, I want to thank all of you for taking two and a half hours out of your time and going through these prompts with us. I hope it really helped you think a little bit more about using these tools, and maybe for some of you it just showed you what you're already doing well intuitively, or what you may want to change. So again, I encourage you to stay if you want; you're also welcome to leave, and I really appreciate your time today. You can either work on your prompt or throw larger questions out to the room.

Yes, the slide deck is actually already shared on the blog: if you go to the Promptathon blog, we've embedded the slide deck there. We'll also follow up with the whole group. And Hannah, yes, we will also make that available.

Good question: are there GenAI tools that can process longer documents? There are a few, and in GPT-4 you can already do that.
I use something called Code Interpreter, I think, which is built into GPT-4. What that allows you to do is upload PDFs and Excel documents. So you can do things like put in the 2023 BC budget and, I wonder if I can do this, I won't do it now, but I asked it to compare education and health spending over time. It's able to run Python scripts against the data tables, figure out which data table to pull from, and make histograms, but it's also able to look at longer PDFs and analyze them and their data.

Also, in GPT-4, you can input up to 50 pages by copy and paste. To do that, what I do is ask it at the very beginning, putting a word in square brackets: "When I write [no output], do not output anything until I write [output]." So in GPT-4, by using that, or just by telling it every time "don't write anything yet," you can input up to 50 pages and get it to do things with that extended amount of text. Rich, did you have anything to add around that?

Yeah, loosely that's called priming. You're splitting a very, very large document into chunks, and you tell GPT what you're going to do, and it kind of gets it; it works with a few other tools I've used as well. You break things into chunks; I can't remember off the top of my head what the maximum amount is, but I think if you try it, it will tell you. Then you split the document into the right number of chunks, either manually or with a tool that helps you, and you paste each one in, again with Lucas's addition: don't write anything until I tell you I've finished. And then you've primed it ahead with all that context. It's a really useful, really good strategy. And someone mentioned a tool, I think it's called ChatPDF.
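That chunk-and-prime strategy, combined with the square-bracket cue, can be sketched as a tiny prompt splitter. The 2,000-character chunk size and the exact bracket wording below are arbitrary choices for illustration; real prompt-splitter tools work the same way.

```python
# A sketch of a prompt splitter: break a long text into paste-sized chunks,
# open with the [no output] rule, prefix every chunk with it, and close with
# the [output] cue that tells the model to start working. The chunk size and
# instruction wording are arbitrary choices for this example.

def split_for_priming(text, chunk_chars=2000):
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    pastes = ["When I write [no output], say nothing until I write [output]."]
    for i, chunk in enumerate(chunks, start=1):
        pastes.append(f"[no output] (part {i} of {len(chunks)})\n{chunk}")
    pastes.append("[output] Summarize everything I pasted above.")
    return pastes

# Each list entry is one message to paste into the chat, in order.
pastes = split_for_priming("a very long document " * 500)
print(len(pastes), "messages to paste, one at a time")
```

The "(part i of n)" labels are an extra cue so the model knows when the context is complete; they aren't required by the technique.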
I don't really know the veracity of that site, but you can upload PDFs to it and, I think, talk to the PDFs through it as well. And Tara says yes, there are ChatGPT prompt splitters. Thanks, Tara.

With "no output," are you uploading PDF documents or pasting in text? I can speak to the second part; I'm not sure about the first. What I was doing with "no output" is a couple of things. One, just to mention, is that you can use things like square brackets and syntax to highlight certain things; I didn't mention that during today's session. And I was referring to copying and pasting text into it rather than a PDF. Other questions or comments? Hopefully you're all working right now on the prompts that you brought in.

How do you save the model's answers for future use? There are a couple of ways, Patricia; let me just share my screen with you briefly here. It depends on the tool, but if you're using ChatGPT, I'll show you. Did I lose my ChatGPT window here? One moment, and I'll start sharing again. So with ChatGPT, there are a couple of ways to do this. One is around the side here, Patricia: all of these conversations are saved automatically, so I can go all the way back. It's organized by time, and these prompts are saved; here's the fallacy game, et cetera. But also, if I go up to this link here, I'll just annotate it for you so you can see it, one moment: this link right up here. Click on that, and it takes the whole conversation and lets you save a copy of it. If I scroll down to the bottom here, I can copy a link to it, and that actually allows you to share it with other people.

The third way you might think about doing it is what some folks suggest: creating a prompt library, where you save every prompt that you make along with its output. Some people are doing this as an organization, so their whole organization has a shared library.
They're sharing different prompts and outputs so that they can keep revising their prompts and know what's working. So we have about three minutes left. I'll open it up for any final comments or questions, or things that you think we should have touched on today. And Tara mentioned: please consider a workshop on agents and plugins. Great idea, thank you. And thanks for joining us. I appreciate all the wisdom, experience, and understanding you've been sharing.
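As a footnote to the shared prompt library mentioned above: the simplest version is just appending each prompt and its output to a shared file. A minimal sketch; the file name and record fields here are invented for this example.

```python
import json
from datetime import datetime

# A minimal sketch of a shared prompt library: append each prompt and its
# output as one JSON line, so a team can later review which prompts worked.
# The file name, field names, and default tool label are invented.

def log_prompt(path, prompt, output, tool="ChatGPT"):
    record = {
        "time": datetime.now().isoformat(timespec="seconds"),
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_prompt("prompt_library.jsonl",
           "Play a logical fallacy identification game with me, one turn at a time.",
           "Statement 1: ...")
```

A plain JSON Lines file in a shared folder is enough to start; the same idea scales up to the organization-wide libraries described above.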