It is my esteemed pleasure to introduce you to Inara Scott, who is the Gomo Family Professor and senior associate dean in the College of Business at Oregon State University. We're thrilled to have her here. She has research expertise in the areas of sustainable business, business law, environmental law, and regulation, and she also has internationally recognized expertise in artificial intelligence and generative AI, so we're really excited to have her here. I believe I sent around a link to a recent article of hers; she has a 2024 publication in the Journal of Legal Studies Education about meeting the challenges of generative AI, and that's what she's here to share with us in today's presentation on teaching in the era of generative AI. So thank you, Professor Scott, for being here today. We are recording the session, so we will leave the link available for you to share widely.

Thank you. Well, it's a pleasure to be here. I love talking about generative AI, and I know this can span a huge range of folks, from those who are not using it at all to folks who are quite familiar with it, maybe setting up your own GPTs or using it to create chatbots — we have all sorts of stuff going on where I'm at. I am part of Oregon State University, and when the original ChatGPT was launched, I was the associate dean for teaching and learning, and my heart is very much in that work. I teach in a business school, and I started getting a lot of emails and questions from faculty who were saying, I'm seeing this in my classes; Google is calling it a red alert. There was a lot of fear, anxiety, and questions about what this technology was going to mean for us.

So I guess I was sort of an early adopter. I started using ChatGPT back when it was still free to everybody, and then I got a premium account when those became available, and I have found that at this point it is deeply integrated into just about everything that I do. I use it to help me plan my syllabi. I use it to help me plan meetings for my leadership team. I use it to help me crunch budget numbers. My husband is also a K-12 superintendent, and he uses it all the time, and we talk a lot about the promise of AI for providing individualized instruction and support for students, so we're both really excited about the potential of AI. There are a lot of concerns about bias and inequity, but we're really excited about the promise of AI to actually help address some inequities and provide more support and more structure for students who don't necessarily get it right now. So I'll just set it up that way: I am not fearful of AI. I think it has to be regulated carefully, and we need to think deeply about it, but I am really excited about the promise that it gives us.

So I want to start by hearing a little bit about your experience of AI. I'm going to have you do a little exercise — it's the Center for Teaching and Learning... what's your program called here? The Center for Teaching, Learning, and Scholarship. Your group will approve: we're going to do a little active learning. Everybody needs to think of one or two words to describe your personal experience of AI. You can write them down if you have something open — you don't have to — but just think of a word or two, and then we're going to share our words out. Let's see, anybody want to volunteer? Should I call on people? Yes. Fear. Fear, okay. That's always one. Fantastic. Fantastic. I love it. Yes. Imitative. Imitative, okay. Middling. Middling? Okay.
It's like, kind of, yeah — it can't do everything. Okay. All right. Unrealized potential. Unrealized potential. I said it could be two — he had permission; I said a word or two words, so I did give permission for that. Cyborg. I love it. I wouldn't mind being a cyborg, I think. Yeah. Efficient. Okay. Great. Handy. All right. That's good.

So your perspectives and experiences run everywhere from fear to unrealized potential, and that's pretty much what I find is true for most groups that I talk to and for most folks in higher education. And I think that's all realistic. Everything that I heard is totally realistic, from the fear of what this could look like and what happens if we don't regulate it and don't think adequately about its impacts and just let things happen organically, right, all the way up to, yeah, this can be an incredibly powerful tool, and we have only just begun to think about how we can use it. And similarly, it's not just going to fall into our lap. It is a tool that has to be very consciously and deliberately adopted, and we have to think about how we want to adopt it.

So I'm going to talk really, really briefly, just so that we're all on the same page, about how these things work. My dad was a professor at the University of Buffalo in computer science and then artificial intelligence and engineering, and he's worked across all of those disciplines. I just saw him this weekend and he helped school me a little bit because I had a couple of things wrong — you've always got to go back to dad; dad always knows, right? So just a couple of notes about what generative AI is and what it does.

First, we have to distinguish between AI and artificial general intelligence. Artificial general intelligence would be a computer that can actually think like a human, or is capable of human thought. We do not have that; that does not exist. There is no artificial general intelligence in the world. There are some folks who are deeply skeptical that we will ever get there, and then there are other folks who feel like maybe it's five or ten years down the road, so there's a lot of difference of opinion. But the stuff that people are most scared of doesn't exist. The computer that's going to make decisions for us and maybe take over our society — that would be artificial general intelligence, and that's not out there. So let me take that off the table for our fears.

What we have are different types of artificial intelligence which are limited. I just think of these as tools that can react to their environment and then produce something in return that we have asked the machine to produce, right? We've given it some clues about what we want it to create, and it's doing that, but the really amazing thing is that it's responding to its environment. It's taking contextual clues, it's taking in information, and then making a response. For generative AI, I like to think of the large language models. Those are things like ChatGPT, Gemini, Copilot if you've used that, which is integrated into Microsoft's tools, Llama, Claude — there are a lot of different types of large language models, and different companies own them and utilize them. But for the large language models, basically just imagine dumping everything on the internet — all of the language, primarily in English, that is out there — into one giant database.
And then you give it to the machine to learn from. So what it's going to do is take this huge database, and then it knows how to create data as an output based on the contextual clues that it has learned from what's in its database. So it has learned from us, right? What it produces, you can think of as crowdsourcing. You can think of it as a most likely outcome. You can think of it as just looking at the context. The way the word strings actually come together is sort of fascinating, because what that huge model is doing is actually saying, when I see the word "and," what are all of the things that I know about the word "and"? Well, it's usually got a noun on either side of it. It usually appears in this place in the sentence. When I see the word "generative," I know that a lot of times the word "AI" comes after it, right? It's got thousands of these — tens of thousands of clues. And then it takes the questions that we give it and it comes out with the most likely answer based on the information that we've given it. So we can think a little bit about what problems will come out of something that is crowdsourcing answers to our questions. I'll let you think about that for a second.

But first, okay, how do these machines actually learn what they're doing? I like to use images a lot because they can help us understand what's happening with text as well. Imagine that you are a computer and that you need to figure out which of these things is a goldendoodle and which of them is fried chicken. This is actually quite difficult. If you're someone who loves memes, you can look for a pit bull or a potato — that's a really good one — or a dachshund or a doughnut. They're surprisingly complicated. And so it's going to try to figure it out, very often based on contextual clues. So here's a particular challenge: I usually see dogs in the grass, so if it's in the grass, that's probably not fried chicken. But here's some fried chicken that has green next to it — that's confusing. I think of fried chicken as being in a bucket, but this guy's in one too. So it's going to make guesses, and then it gets trained by humans. Humans have to go through and rate the answers that it gives, and they tell it if it's right or wrong, and it learns, okay? So this is happening on a tiny individual level here — you can see this example — but it's happening on a massive scale for the large language model.

Okay, so what are the limitations that we can think of, based on understanding that this is a crowdsourced model? This was an image I asked DALL-E, which is one of the image-generating programs, to make — this was actually about six or seven months ago at this point. I asked it to make an image of a college professor surrounded by a group of students. DALL-E at the time was not very good at hands, so you can see they're all a little bit alien-like. But what stands out about the college professor that it has created? It's a white man, of a certain age and of a certain girth, and he's surrounded, right? He's the sage on the stage, and the students are all a little bit alien-like. But again, they're sort of white-ish and young-ish, for all that they're a little alien. So that, to the computer, was the most likely, reasonable, helpful outcome it could produce when you say, show me what a college professor looks like: here's what a college professor looks like.
And there are indeed many college professors that look like that. There are also some that don't. Okay, so what does it do now? It has been improved. This is DALL-E just about a month ago, same query, and this is what it gave back to me most recently. So it's a woman now, and the students are more diverse. They're still young. She's in a classroom that looks a little more K-12 to me than college, maybe. But that's very different, right, than the previous image. Let's see if I have another one — I asked it to try again. It still has some wonderful issues with the legs, right? Not sure whose legs those are or where her legs went. But when you look at the caption, it actually identifies this man as an African-American professor. And those are the first two images that came back to me, right? So clearly there has been a change, and it's in the up or down votes and the helpfulness ratings — all of those humans who are rating how well it's doing and training the AI are telling it, hey, we don't want to replicate some of these existing biases the same way.

But what is so fascinating about this to me is that you can see so clearly how it has changed. And that is both pleasant in some ways and disconcerting in other ways, because it shows very clearly that what it is going to come out with — the generative part of the generative AI — is very susceptible to and is created by humans. Human agents are intervening, right? So we need to know that. We need to understand that the technology is not God-like. We have control over this technology, so it is incredibly important for us to understand it and be in a position to teach our students about the technology and how it's going to work.

Okay, so that brings me to what I would consider the essential skills for an AI-assisted future. I'm at a business school, so we're not as nervous about talking about the fact that we're preparing students to go into jobs, but I know that makes some people uncomfortable. But, you know, when we think about how we are preparing our students to walk through the door of their job — are they prepared? I don't worry that they're going to be replaced by a computer. That's not my fear. My fear is that they're going to be competing for that job with somebody who knows how to use AI really well, and they are going to be less efficient and less effective at their job. So that is why learning these tools is so essential for our students. Again, they're not going to be replaced by computers. There are jobs that will be replaced by computers, but the jobs that they're getting, by definition, have not yet been replaced by computers. And there's going to be a premium on human skills.

So what are the human skills that will remain? I'll come back to this picture because it's great. The essential skills that will remain — think about what we do in context that is essential. What the AI lacks is context, human connection, communication, and the ability to truly innovate and think across disciplines. So what are some things that I think we need to really focus on with our students? Well, the number one thing is critically assessing AI output. I'm going to go back to this picture for a second. I asked the AI to create an image for me of the evolution from human into robot — the cyborg, right? And I was thinking of that famous picture of evolution, right, from the ape to the human.
So I asked it to create a picture in that style, going from the ape to the human to the robot, and then it threw some super interesting things in there, like a fish head. Why? No idea. I definitely did not tell it to put a fish head in there. So AI does some weird things. It is not always right. So being able to critically assess that AI output — knowing our own field well enough to be able to assess what it does — is essential. We are not done teaching students. They still have to learn what the content is, but maybe not the same type of content, maybe not as much content, and maybe the thinking skills are more important than some of the content. There is some content that's still essential; I'll talk about what I think that is. Okay, so critically assessing AI output is tops on the list.

You can't see it as well because it's in yellow there, but on one side you'll see communication and on the other side you see language and writing. I talk to a lot of students who are really nervous, and I talk to journalists and journalism students. I talk to students who say, my voice won't matter. If everything is computer generated, it all sounds the same; it just takes all the life out of everything. I'll say two things in response. One of them is that your voice is going to be even more important now, because — and this is particularly true for our marketing students, our communication students, our literature students, our journalism students, everything — they're going to be swimming in AI-created content. Their unique voice is going to allow them and their content to stand out in a totally different way. So creating your voice, your unique voice, is just as important, if not more important, than ever before. And the other thing that is important is understanding our own communication, because we are communicating with the AI. Part of what we have to understand, and what we're going to have to learn how to break down, is what all of the pieces of information are that we're giving when we ask a question of the AI — we call them prompts. When we write a prompt, what are the things that we're saying, what are we not saying, and what did we forget to say but should have said? Sometimes the language that we use will communicate something to the AI that we did not intend to communicate. Sometimes we will fail to communicate some essential information. All of that is language skill, that's all communication skill, and it's all incredibly important for our students.

Then there's learning how to engage in team building and communication: how do we empathize, how do we form relationships? We hear all the time from employers how incredibly important that is, and we're already not doing as well at it as they would like us to, right? If you're familiar with the NACE competencies — those are the skills that employers say students need — one of the top ones is consistently about groups: how do we work in groups, how do we do relationships, how do we manage projects? All of those skills are absolutely essential for our students. Then there's engaging in problem solving, and the ones on the bottom I'll talk about a little bit more: addressing bias, and making innovative leaps — retaining the ability to engage creatively as humans is essential for us. That is how change really occurs, and if we stop prioritizing it, or don't think about how we deliberately and consciously embed it in our curriculum, we're going to be in trouble. And then the last group is magic words and content-specific knowledge. I'll talk about that too.
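Before moving on, here is a minimal sketch to make the earlier "most likely next word" idea concrete. It is only an illustration, not from the talk and not how production LLMs are actually built (real models learn vector representations rather than raw counts): it simply tallies which word tends to follow which in a tiny made-up corpus and then picks the most common continuation, in Python.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "everything on the internet."
corpus = (
    "generative ai can help teachers plan courses . "
    "generative ai can draft quiz questions . "
    "teachers can review ai output for bias ."
).split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(most_likely_next("generative"))  # -> 'ai'
print(most_likely_next("ai"))          # -> 'can' (seen twice, vs. 'output' once)
```

A large language model is doing this same kind of "what usually comes next" prediction, just over enormously more text and with far more context than a single preceding word.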
So let's talk about bias for a second. If you haven't used ChatGPT, this is roughly what the chat thread looks like: there's a spot for you to ask a question, and then it provides these nice responses. Back when I first started playing with it — I like jazz — I would ask it for a list of the top 20 jazz pianists, and they were always all men. It would just spit out this very comfortable list of usually the same folks. And then maybe two or three months ago, a woman would start to pop in every now and then — I'd be so excited — and it would actually start giving me more context. So now when you ask it, give me the top 20 jazz pianists, it says: listing this can be subjective, as it often depends on personal taste, the era, and the style of jazz. So it has actually learned how to give context to the person who's asking the question. It is improving my question in its answer. And I can actually ask it to improve my own question: I can put in the prompt and then go back and say, what would have been a better prompt? And it can actually figure out an answer to that.

So now it is giving me context along with the list, but here's where I can query it and try to dig into some of this bias. I noticed there's only one woman on the list; why do you think that is? Well, it actually gave me a really good response, because it told me, here are some of the factors that you should consider: historical exclusion and gender bias, limited visibility and recognition, social and cultural norms. This was not the whole answer, but I picked out a piece of it. So it is able to analyze its own biases to some extent. So we can teach students who are using generative AI to also use generative AI to question themselves and their biases, and the biases in the responses that they're going to get. Because we exist and walk around in the world with all sorts of embedded unconscious biases. We live in bubbles — we all know this, it's well researched. We don't even necessarily know when we are asking questions that limit the output we're getting. And so being able to engage in some of these dialogues can actually be really beneficial.

I do work in sustainable capitalism, so I put in a query here: is capitalism linked to white supremacy? That, by the way, is an image of a capitalist — I asked it to give me a capitalist, and that's what it gave me. And it gave me a pretty complex response: the question of whether capitalism is linked to white supremacy is complex and multifaceted, and it went on from there in the output. Now, I've asked this question in different ways as well. I've asked, explain how capitalism is not linked to white supremacy. Different response, right? Totally different response. I can also ask, how is capitalism linked to white supremacy? So this is where the communication skills are so essential, because each one of those questions is going to give your students a different answer. It's going to give you a different answer if you start using this more. So it allows us to actually look at ourselves: how am I changing the output? Think about research you might do — how are you changing the output of your research by the research questions you're asking?

Okay, so what about content? What do we teach? What stays? What goes? This is probably the hardest question that we have in front of us. And I'm happy to talk a lot more about some ways that we can use it outside of pedagogy and outside of the classroom, but let's pause on content for a moment.
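Before turning to content, here is a concrete illustration of how those small wording changes shift the output: a minimal sketch that sends the three phrasings of the capitalism question to a chat model and prints the answers side by side. It is only an example — it assumes the OpenAI Python client, an API key already configured in the environment, and an example model name; any other chat model (Gemini, Claude, Copilot, a local Llama) could be substituted.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed and OPENAI_API_KEY is set

client = OpenAI()

# Three phrasings of the same underlying question; each frames the model differently.
prompts = [
    "Is capitalism linked to white supremacy?",
    "Explain how capitalism is linked to white supremacy.",
    "Explain how capitalism is not linked to white supremacy.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whatever your account offers
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print("ANSWER:", response.choices[0].message.content)
    print("-" * 60)
```

A natural classroom exercise is to have students compare the three answers and then add a fourth turn asking the model, "What would have been a better prompt?" — the same move described above with the jazz pianist list.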
So when I talk to faculty about their courses and what dictates what's in their syllabus, I would say that the number one thing that dictates what's in the syllabus is not "this is what my students need in order to be successful after they graduate." It's more like, well, accreditation requires it, or I'm the prerequisite for this other class and they said they needed this, or the textbook does this, or the person who developed the course did it that way and I don't have control over the course. So this is not new to higher education. This is not a new question that we are struggling with: what content stays and what content goes? Are we teaching the right things to our students? This is something we should be struggling with in all contexts, right? Not just AI — but the AI really sharpens the problem.

So when we think about what content stays and what content goes, this is my framing, and the article that I shared with you all takes this idea and this framing and fleshes it out more. Basically, I have broken types of content out into these buckets and said this type of essential content needs to stay in our curriculum — and by implication, the stuff that doesn't fall into a bucket probably could go. There may be a right place or reason for it, but it doesn't have to stay; it's not essential. Let's talk about what each one of those buckets is. I teach law, so these are examples from law, but you'll find these are really very discipline specific. I can't tell you what is essential content for a chemistry class or a marketing class or a literature class, right? I can only give you some examples from my experience.

So the first bucket that I talk about is structural basics: how is the field organized? We can think of this as sort of the metacognitive level of how do I think of and conceive of what happens in this discipline? And we probably need to teach a lot more of this anyway. This is really important, particularly for first-generation students, for students who are not coming into our context with a really privileged background where this stuff is second nature to them. This is teaching what the structure is that sometimes we ourselves are blind to, and it forces us to break things back down into component parts so that we really think about what the structural basics of our field are. For me, that might be things like: what even is a legal theory? What does that mean? If I talk about the elements of a legal claim — that's how we break out and how we prove things in the law — I have to teach students that. They don't walk into my classroom knowing that, okay?

And then I can think of key content areas, and those key content areas are basically areas of content that are either essential in and of themselves or can serve as an analogy — a way to learn something else. So I put intellectual property, or IP, on there. I can teach students about trademark, I can teach them about copyright, and they'll find that there's a very common pattern: there are state statutes, there are federal statutes, there's case law. I can teach them how to read a case. I can teach them all of those things for that particular content, but then when they are learning employment discrimination, they use that same structure and they can actually teach themselves a lot of employment discrimination law. I don't need to spend a lot of time teaching them employment discrimination, because they can use those computers in their hands that they bring everywhere they go.
They don't need to memorize the rules for employment discrimination. They can look it up on the fly because they know how to learn; they've learned that tool, that metacognitive process. So, having some key areas that are analogous to other things — and then I talk about magic words. I actually got caught by this. I was talking to my dad about this one again and he said, magic words? What does that even mean? I said, oh, that's funny, because magic words are actually magic words. In the law, you call something magic words when the words have a specific meaning. The phrase "reasonable person" has a specific meaning in the law, and court cases actually refer to magic words. So magic words are magic words, and I thought everybody knew that. Apparently not everybody knows what magic words are. So I didn't just make that up: magic words are magic words.

So what are the key phrases that actually change the outcome of your queries? For your discipline — if we were going to do a workshop with your faculty, you could break it down by discipline — you could say, for your engineering folks, for structural engineering, what are ten words such that if students don't know what those words are, or don't know how to use them in context, they're going to get the wrong answer when they put a query into a box somewhere? We need them to know these essential concepts. An example in the law: the reasonable person, and reasonable accommodations. If I ask ChatGPT, I'm a small business owner, and one of my employees has asked for a device that allows them to use speech-to-text — they want one — is that reasonable? Okay, think about that. Now I could also ask it, same situation: my employee has asked for this device; is that a reasonable accommodation? Absolutely different answer, because the phrase "reasonable accommodation" causes the computer to think, oh, I know, reasonable accommodation — that's connected to this thing called the Americans with Disabilities Act. That's very different from what's "reasonable," which is not necessarily connected to the Americans with Disabilities Act. So those are really important concepts, and they're going to be unique to each discipline, okay?

And then, again at the metacognitive, problem-solving level: how do students make decisions in your discipline, and how do you want them to? How do you want them to go about engaging in problem solving? In the law, we have a very specific way — people talk about thinking like a lawyer — we have a very specific way we go about doing that. Science, of course, has a very specific way that we pose questions: the scientific method. Our students don't know that by osmosis. They have to be taught these skills. So we need to identify them for our disciplines and specifically teach them to our students. And within those structures, how do they test the concepts that the AI has given them? In the law, I want to teach them: here's how you find a real statute. Here's how you find a real case. Here's what it looks like to read a real case — let's get the experience of reading a real case. What are the reliable websites I would send them to where they're going to be able to test the information they've been given? And decision making: when students are making decisions, I want them to think in this sort of trifecta — what are the financial risks of a business law decision, what are the legal risks, and what are the strategic risks?
That's a decision-making framework that I want them to use when I give them a complicated hypothetical. So what are the decision-making frameworks that you want them to use? Because if I go back here, problem solving is one of the key things that students need. We hear this all the time from employers: I don't want a student who just knows how to do one thing; I want to be able to give them a complicated task and have them figure it out. They don't know how to do that just by living — they have to learn it, right? We have to teach them.

And then we really need to know where bias lives in our disciplines. Again, this is very discipline specific. Bias exists in all sorts of places in the legal system. Some of it was very deliberate for a long time: white men were the only ones who could serve on juries; there were laws that excluded people from ownership of land, right? So we know what some of those look like. But as a business owner, bias might show up in my hiring practices. Bias might show up in my governance structures. It might show up in the way that I structure my product offerings. And when I know what those areas are, then when I ask ChatGPT, can you come up with a new product or develop a marketing campaign, I can also ask it, how might this cause a biased result for certain customers over others? Because I knew what question to ask. I knew that this was an area of potential bias that I needed to be thinking about.

This next one is for the real pedagogy geeks in the room — I don't know how many of you would identify that way. Our online learning unit came up with this new version of Bloom's Taxonomy that I think is actually quite helpful. What it does is look at each of the different levels of Bloom's Taxonomy, which we already apply to the way we put our courses together and the learning outcomes that we use, and it asks what about that skill can be done by AI and what cannot — and then, when you are integrating this into your class, what does that look like? If one of my learning outcomes is that I want students to be able to remember something, well, the only time that task really has to be owned by the student is if technology is not available; if technology is available, students do not need to own that task. So this is a really helpful tool, and I'm happy to share it — it's online, and I think it's got a Creative Commons license on it, so you're welcome to use it yourselves.

Okay, so here are my ideas for how to integrate AI into your teaching. These are things that I have done with students, and that folks in my college have done with students, and found to be really fun. First, looking at AI responses for areas of bias. Oregon State University has a license now with Microsoft, so we have access to their Copilot in a private, protected environment. You all will have to figure out how comfortable you are with these tools and what your rules are for students using them, because that is something you have to figure out, right? But if students use AI, can they have it analyze its own output? This is just a fascinating thing to ask it: what might have shown up in your response that was biased, and why?

The next one says take AI text and rewrite it so it's less than 40% AI generated. I was actually the college hearing officer for my college, so I dealt with all of the academic misconduct cases for five years, including the start of the ChatGPT revolution. And one of the first responses we heard from some people — thank goodness not most, I would say some — was, it's all just plagiarism; they're all just cheating.
The first thing I need is a tool to catch the cheaters. And there were immediately a number of these AI detectors — Turnitin came up with one; it was terrible, and it was pulled. But there's one called ZeroGPT, there's one called GPTZero, and you can put in text and it will tell you the likelihood that it was generated by AI. They are not reliable, and they are biased against English language learners, folks who don't speak English as a first language. So I strongly urge you: do not use them for academic integrity cases to prove that a student cheated. But here's where they are really valuable: students can see in their own text how much they look and sound like a machine. And then they can look at it and say, I'm not very valuable if the machine did all the work here. In business we talk about your value add — what did you bring to the party? And if what you brought to the party was about 10%, I don't know that I need you, right? You're not that valuable to me, I don't know that your writing is going to stand out, and I don't know that you really understood what you wrote about. So students can do this themselves. They can put their text in, they can edit it, and they can see what it takes to get it to less than 40% AI generated.

Next, a presentation or an infographic or even an essay targeted to different audiences. This is absolutely fascinating. Let's say that you had a land use issue, and you told it, my audience is going to be a group of mostly white, upper-class, college-educated people — now make me a presentation. And then you could say, my audience is going to be mostly Hispanic, first-generation folks who primarily have a college education. That's going to be different, right? You can target it, and what the students will see is the impact of your bubble, right? On what is persuasive to you, and probably on what you're hearing from your sources. Because what the AI is doing, again, is not something it came up with. It is reflecting back what we do. It is reflecting back what humans do. It's saying, let me figure out in my database what looks the most like that group of people and the way they talk and the kind of material that is aimed at them, right? So it can illustrate your own biases for you by having it target multiple different audiences.

This has been talked about quite a bit: AI makes things up. Just like it put a fish head very randomly on something, it makes things up, okay? So you always, always want to caution your students that they need to double check, particularly if it's something like a journal article, something with a page number on it. Think about the way that it works. It's saying, when I see this series of letters, it's usually followed by this series of letters. So it comes up with very plausible article citations, because it's like, there's usually a number, there's usually a name, right? It can say, Inara Scott — oh, she usually publishes in this journal, and there are usually these numbers. But it's volume 21 instead of 20; it's just made up. It's just making stuff up. So have students look for reliable sources.

Next, use AI capabilities to increase the scope of an assignment. Your students are probably using AI. I think the latest survey I saw said something like 65 or 70% of students are using it, and only about 30% of professors were. So we've got to catch up, right? We need to be conversant with the tools our students are using.
But one of the great things is that instead of saying, okay, the only way I can do assignments is to put my students in a room and just give them pencil and paper — I'm not even going to let them access the internet (and that is a technique folks use) — what if instead we recognize, this is the tool they're going to be using in the workplace? What does that mean for the capabilities that they have, for the efficiencies that they could gain, for the different ways that they could go about a task? So instead of assigning groups of students to come up with one business product idea, have them come up with 20. And then they've got to do the task of assessing — for our region, and for what is popular for us in the Pacific Northwest — which would be the best of those products. So again, making assessments of the AI output.

Students need to learn how to use AI safely and responsibly, and so do we. One thing we can do is teach them more about what prompts look like with different large language models, and the differences that small changes in wording make. So my query about white supremacy — show me how it's linked, show me how it's not linked, is it linked? Three different queries will provide three different responses. Have students change their own queries and think about what that does to the output.

Develop scaffolded assignments that teach students how the LLM can provide them with feedback. One of the most wonderful things this technology can do is serve as a built-in tutor for your students. We can't provide that kind of attention — we would all love it if we were sitting in a classroom with three students and we could just mentor them and have a Socratic back and forth, but we can't provide that; I can't. So think about what you could have them do to get feedback on their work. Then you can have them provide you a copy of the chat so you can see how they interacted with it. Some ideas might be: start with an outline of an essay, and then ask the LLM to provide feedback on your outline; then show me, your professor, how you changed your outline as a result — what were the changes that you made? Or have it give you feedback on your writing: what do I need to do differently with my sentence structure? What's some general feedback you can give me about my writing? Maybe it's a math assignment: I keep getting these problems wrong; what am I doing wrong? It has a built-in ability to look at the problems where students are struggling and provide one-on-one feedback and attention — again, the kind we as faculty would love to give them, but we just don't have the time; we have too many students to be able to do that. So this is where it can be a really powerful tool; I'm frankly excited about that one.

Okay, and then finally, assessment. Your students have the ability to cheat on every single assignment that you give them. It can pass the bar. It can pass neuropsychology entrance exams. You are not going to come up with something so clever that the AI can't figure it out. It's going to be able to figure it out. Now, it might not do very well at it, for a whole variety of reasons. And so I know a lot of faculty who will say, that's fine — if you want to work with the AI, go ahead and do it. And they find that the students who use the AI exclusively, who don't engage in a back and forth with it, who aren't applying their own critical thinking to it, don't do very well. Because it's just crowdsourcing. It's not responding to the contextual clues of your classroom. It's not responding to the individual ways that you teach and the things that you're looking for.
So they can cheat, but they probably won't do great at it. But they can cheat. And particularly if you use multiple choice questions, if you have something that has a black-and-white sort of answer to it, it's going to be really good at that. So what does that mean, if your students can cheat? We could play an arms race and try to outwit them. I tend to think that's not the best use of our time or resources. I would rather that students engage with those assignments because they see the value in them. Your students do not want to waste their money, or their parents' money, or a family member's money, or a scholarship, on something that they think is useless — to pay a bunch of money for tuition and then cheat their way through school. They actually want to learn stuff. They really do.

I say that because I hear the opposite sometimes, and I tend to think we've actually done that to them, because we have taught them: just sit and repeat, just sit and answer the multiple choice questions, just play it back by rote. They learned it in high school. They learned it in grade school. And then they come to college and we're like, oh, now you should explore ideas for the love of learning. It's a little unrealistic, right? We're asking more of them than perhaps is fair. But if you show them, this is actually going to be essential to your success in your job, this is going to be essential to your ability to solve problems, this is going to be essential to how you interact with other students in my class — when they see the value of it, that is going to create a different response in your students. If you're assigning busywork, no, your students are not going to feel any particular obligation to it. Do you love doing busywork? Do you respond to all the emails that you get? I think you don't, right? We as humans do not like feeling like our time is being wasted, and that's fair. So for our students, if we're going to assign them work, it is fair for us to explain to them why we are assigning that task and why it matters. So engaging students, finding ways to deeply engage them in the material — that's important. If your class focuses on the lower ends of Bloom's taxonomy, that's going to be a particular challenge. Those are the courses that are going to need the most revising.

And then the final bullet — and this comes out of doing a lot of academic integrity cases — is that we ourselves need to understand in advance how the technology works, so that we can adequately communicate to our students what is okay and what is not okay. The most unfair thing that we can do is to have students think: I was using Grammarly — isn't that okay? Everybody uses Grammarly. Oh no, I don't let you use Grammarly in my class. Well, how was I supposed to know that? Well, you should just know that it's plagiarism, that it's cheating. For many students, that's how it feels, because there's a whole spectrum of ways to use this technology. You can plug in "write an essay about blah, blah, blah," right, and it can spit out a whole essay. That's very different from: I just wrote this outline; can you give me some feedback on my outline? That's very different from: here's my finished essay; can you give me some pointers on my language? All of those are totally different uses, but if we as faculty don't identify what is okay and what is not okay, we are setting our students up to fail. And that's not fair to them. But we have to understand the technology well enough ourselves to be able to communicate that.
And the last thing that I'll say — and I'm sorry not to leave more time for questions — is that I was having some fun with images in DALL-E. I asked it for some images of jumping onto a speeding train. I actually had to tell it to put a woman in there. It does not think that women jump on speeding trains; it is pretty sure that the only one dumb enough to jump on a speeding train would be a man. So I finally did get a woman — I said, please make an image of a woman jumping onto a speeding train.

This is all moving really fast. The technology is changing incredibly fast. And our temptation may be, I'm just going to wait until things settle down, and then once things settle down, I'm going to give it a try. It is going to be impossible to wait for this train to stop. It is developing at an incredibly rapid rate, and the only thing that you can do is jump on and join it midstream. You're going to fail. You're going to try some things, and they're not going to work very well. Your students are doing the same thing. Learn together; share that vulnerability with your students: hey, I tried this thing out, it went terribly — how did you do it? What worked for you? Learn with your students. Share with them what you're doing. But we all need to be looking at how we're jumping onto the train, because it's not stopping, and it is going to be essential. And that's it. We have just about five minutes, so I'm happy to answer questions. Yes.

I've been teaching with GPT for almost three years now, and something's come up this semester which I wasn't expecting: students seem to associate a stigma with using ChatGPT. And I'm very clear, saying, use it — in fact, for a lot of assignments they're required to. For the other assignments, they have strict instructions: show me the prompt, show me what it generated, show me how you used what it generated. But students are very reluctant to do it. One of them will try to sneak the ChatGPT material in without giving any credit. I pointed out, that's plagiarism; you're in a lot of trouble; there can be real consequences. Yeah, yeah. But there's a stigma associated with it, and it's just hard to deal with — to say, look, it's okay, you can use this, and yet they seem to feel, no, no, I shouldn't be using this.

This is a great question. I'm going to repeat the question just for the video: we're seeing students who feel like there's a stigma, even when they're told they can use it. I actually have one of my statistics faculty members who has the exact same experience. They can use it, and they use it, but they don't tell him. He tells them, tell me if you used it, and they still don't want to tell him. And I think this is incredibly important. I was visiting with my sister and her husband and my nieces over this past weekend as I've been traveling, and every single one of those four people asked me, aren't you scared of it? Isn't it cheating? So there's this sort of societal fear that either it's really bad for us on some deep ethical level, or that using ChatGPT is cheating in some sort of existential way. And I had a prolonged conversation with my niece. She works for an organization called Ceres that does really important work with sustainable business. And I said, well, have you used it in your work there? And she said, no — it doesn't seem right. And I said, well, you're doing really important work.
If this tool could make you more efficient, isn't it unethical not to use it to further the work that you're doing — to be more efficient in your job? So I do think you're reflecting something really important, which is that there is a societal fear that it is somehow unethical. And, spoken or unspoken, there are probably students whose friends have gotten in trouble for using it, so now they're like, ooh, we don't use it, it's cheating. It just means we need a cultural shift; we need to have the conversation. Everybody in the university needs to be talking about this. Every faculty member should have an AI statement in their syllabus — every single one. Nobody should leave unaddressed at the beginning of class what is okay and what is not okay, because it's the folks who aren't addressing it that are creating that fear in students, that they don't know what's allowed and what's not. So I appreciate you saying that. I hope that becomes part of the conversation. Yeah. Maybe one more question? Yeah.

So this is all based on training models using historical data. As we are getting on the train, are we putting less and less focus on creativity and coming up with something totally new?

So again, I go back to the conversation about voice with the journalism students: I don't think so. I've had wonderful conversations with folks in writing and the liberal arts, people who are creating art that is AI enhanced — visual journeys, sound journeys. I think it allows us to engage in a different type of creativity. The models themselves may reflect what came before, but so do we. We are ourselves a product of what we learned, which came before, and it's the way that we put that stuff together that reflects our creativity. So I think there's enormous promise, actually, to spur different types of creativity and to think about creativity differently: how do we put things together, and how do we continue to make those innovative leaps? I think that's a common concern, but I really think there's enormous ability to engage with the AI in a really creative way.

I love the argument that we're the same thing. I'm just stuck on the fish head. Yes, yes. That's a good visual, right? I want to know if you can have an ongoing conversation — like, could you have asked ChatGPT, or whichever one you use, why did you put a fish head on one of the characters? Would it be able to answer you? Does it have self-awareness of why it does what it does?

No, not really. Even the higher levels of the AI don't really know what they produced. It knows on sort of a basic level that I was asked to do this and I did this task, but at the minutiae level of how it put something together — you can ask it the same thing multiple times and it is very often going to give slightly different responses. And I think what the computer is doing is that each word has these vectors of numbers associated with it — the word "and" has 17,000 numbers associated with it — and then it's connecting the next one together, and the next one together. So each time it does that, it's this incredibly complicated series of connections that it really can't do twice. And it doesn't save that. You can ask it, like the questions about bias — where might your answer reflect bias? — and it can do that, but it's not good at looking at its own output. It remembers the question that you just asked it, so it remembers the question generally.
It's not very good at remembering its output, and it'll forget the question four or five queries down — you've got to keep refreshing it. I think there's actually an important, common misapprehension here: people think, well, it's just going to do it for me — I just ask and it gives me the perfect outcome. Actually, getting something valuable out of it can take some time. You have to interact really deeply with it in order to get the output that you want, which should make us feel better, rather than thinking, oh, then it actually doesn't work. It only works really well when there's a relationship between what I know and what I can get out of it by communicating clearly. I'll give you the scheduling problem: I set up my schedule — these are the adjuncts I have, and they can only teach at these times, and you can't schedule three classes back-to-back, and all this stuff — and it kept making the same mistake. I'd say, no, that person can't work after 3 p.m., do it again. Oh, I'm sorry — and it does the exact same thing again, over and over again. So yeah, I assume eventually we'll have those kinks figured out.

So, Carrie, one more question. Is it like CliffsNotes — could a student say, could you summarize chapter four of X, and it would do it? Yeah, yeah, okay. Yes, it's very good at that. That's actually one of my concerns, because the things it's really good at are all the stuff you teach them in K-12. Maybe they won't read anymore. I'm wondering if they're going to get to us in a few years without being able to do much of anything, because ChatGPT made it all so easy.

I'm going to say no. Because I see what's happening at the K-12 level, and to the extent that we were having them do book reports on a book, that actually wasn't probably a great use of their brains anyway. Just like with our students, we want to push them to the upper levels of Bloom's, and we want to do that at the lower grades as well. So it's the same sort of issues in the lower grades that we're dealing with in ours; we're just ratcheting up the complexity of the content. And actually, maybe it will be helpful when they come to us, because I've found for a couple of years that students aren't good at asking for what they want — at formulating a question to get a good response back. We find that they don't Google well, right? That's a skill, right? Absolutely. But they're actually different skills. What makes you good at Googling is taking out all the filler and putting together a series of terms. What makes you good at prompt engineering is much more about context and clues — thinking deeply about how you want the answer, what would best address your question, what is background information, and all of that. So yeah, but they're only going to get good at it if we teach it. They're not going to get good at it magically.

Yeah, I was just thinking, looking at how this is all impacting emerging new programs and certificates — prompt engineering is like the hot topic among the deans and the provosts and so forth. And it just strikes me as an essential skill that our students are going to need, and as an example of, maybe it's a job category, but it's certainly an employable skill. Some things will be replaced by computers, but a lot of things are being created by this in terms of employable skills that didn't exist before. So.

Well, with that —
Yeah, I'll let you all go, but I'll be here if you have more questions and you're welcome to email me a question.