Thank you all so much for the opportunity to follow up on the faculty meeting that I joined a few weeks ago. We'll get a little bit more into the weeds with generative AI today than we had the opportunity to do last time. I think that's probably where you all are at this point: you want to think about some specific instructional strategies and start thinking about assignments in your courses. There are considerations for generative AI use that are really specific to the types of assignments that you have, and so it's more useful for me to hear from you and for us to riff on that a little bit rather than me giving you some fully formed recipes. That's the nature of the title of this presentation. I love to cook, and when you're learning to cook, you start by looking up recipes on the internet and executing those recipes as is. As you get better at doing that, eventually you start to understand how different ingredients work together, what their function is, the chemistry and everything like that, and you can start to come up with your own recipes. So rather than give you recipes today, I want to break down what the capabilities of generative AI are, generally speaking, and also some of the foundational principles of learning, so that hopefully you can put those things together and start to think about different applications given your specific context. We'll do a quick review of artificial intelligence technology concepts, because understanding the technology also helps you understand its limitations, at least as it stands right now, and that's useful to know. I'll review the general AI capabilities, and we'll talk about foundational learning concepts and put those in the context of AI. 
I talked to Stevie earlier and she pointed out that many of you continue to be concerned about academic integrity, so I actually moved that slide up towards the beginning. We can clear the air in terms of academic integrity right from the start and then progress on to talking about some instructional strategies. I also have been doing some presentations recently with students, undergraduate students as well as World Campus adult students, so I can share with you some of how I'm talking about this stuff with them, and maybe you can mirror some of those strategies yourself. And then I really want to spend at least half of the time that we have here talking about applications of AI in your classes, so I'd really love to hear from you. A couple of questions here, just to get a sense of where you all are at, and maybe you can just jump in and answer in whatever words you want. The first question is: do you feel that generative AI has the potential to positively impact learning or your teaching? By the way, "gen AI" is just the shorthand that has emerged, so I'll use that a lot in this presentation. Have you implemented new academic integrity strategies, from the standpoint of dealing with inappropriate use? I'd love to talk about how you're thinking about academic integrity within your own context. And then, maybe for more constructive use of generative AI, have you started to consider adapting any assignments to incorporate AI? Maybe you're not actively doing it this semester, but you're starting to look at assignments and think about them. This is the perspective that I have: one of the things that students rarely do is revise. They write something and they think it's done. 
And I feel like having them generate something with AI and then revise it could help the learning. Yeah, from a learning standpoint, what's more useful: producing a fully polished piece of writing, or going through the process of improving over time? That's really central to generative AI and what it can accomplish from a teaching and learning standpoint. If you're not at the point where you're thinking about this or have specific thoughts about instructional strategies, what is your general feeling about the impact on society, maybe not even education, but generally on society? Do you have a strong feeling at this point about the potential impact of generative AI? Do you think this is a flash in the pan, or do you think it's here to stay? Yeah, not just in education but probably in all aspects of society. I'm going to be "get off my lawn" for a second. There's a piece of me that does worry about inequities that could be built into generative AI, knowingly or unknowingly, by the builders. And students already have a difficult time critically thinking and evaluating information, and I'm not sure how this makes that better. Yeah, it's kind of handing off a critical skill to some dumb machine, or even an intelligent machine. And when you chat with these tools, I worry about what they retain of that information. Yeah, and the types of information you put in these systems can be sensitive. That's why I think the general counsel has been taking so long to come to a decision on this, because the way that people engage with these systems might be divulging data about you that gets incorporated into the models, and scary things like that. Can you repeat that for the folks online? Can you speak up a little bit so that everybody can hear? 
I think the concern that was just raised is about having to log into these systems and how the systems collect data about you: the typical sort of privacy concerns, maybe exacerbated by the nature of the way that you interact with these systems. To respond also to what was said about inequities, and I'll mention this in a minute, certain systems cost money. Microsoft just released Copilot, their AI tools for Microsoft 365, which is a paid tool that costs $30 a month. The paid version of ChatGPT costs $20 a month, and the paid version is vastly better than the free version. So if you're requiring this for students, then some students basically have an advantage in the class if they can shell out $20 a month. That may not seem like a lot, but when I was a student, I could barely afford to pay for food for myself. So that's a consideration. The other thing about inequity is the bias that's baked into how these models are trained and how that comes through in the responses. Do you think a faculty member can require the paid version for a class? Right, so Stevie had mentioned that there was a faculty member who was potentially requiring the paid version as a material for the class. And that, I guess, is permissible. I don't know, I haven't heard, and I've always wondered about that. One of the speakers at the CWC virtual conference two weeks ago was George Siemens, who is currently doing a lot with AI. He raised a really interesting point: one of the things he anticipates is that AI will allow us to offload some of our base-level skills and focus on higher-order thinking skills, which is wonderful, if that is indeed the shift that occurs. 
One of the other things that he mentioned was a concern around AI: there are only approximately 50 to 60,000 people in the entire world who actually have the skills to build these AI models and help train them. So there doesn't need to be a high degree of intention for biases to be integrated into these systems. Yeah, and once they're trained, they're a black box, right? There's a reason that the data GPT-4 is based upon is no more recent than 2021: GPT-3.5 cost almost $2 million and probably several months of compute time to train. So if there are biases or problems with the data, it's practically impossible to turn that around on a dime and fix the issue. And if one of these models does give back a harmful response, it's hard to even figure out what in the model is causing that response, because GPT-3.5 has 175 billion parameters, like little neurons of information. So where in that model is the awful racist thing that ChatGPT just said to you, and how do I debug that? It's not like a program; you don't debug it the same way. [Inaudible audience exchange.] Okay, yeah. And the other thing Megan had mentioned: by the way, George Siemens is a pioneer in learning analytics, so this is sort of a natural progression for him. I will say that it is interesting: at a college level, we want students to be operating at a higher level of reasoning skills, but you need to build towards that. So hopefully we're not giving away the opportunity to establish those foundational skills in the process of leapfrogging over them and relying too much on AI. Again, this is one of these societal problems that we need to work out. 
We've never had easily accessible thinking machines that we can talk to in a really natural way, so it remains to be seen whether what George says is right. But he's a smart guy, so I trust him. Okay, let's talk about academic integrity for a second. I don't have a lot to say in this regard, because I don't think there are particularly good strategies here, but I will talk about what I've learned so far, and I encourage you all to jump in with your thoughts. One of the strategies that's sort of obvious is really good communication with students: making your expectations about the use of generative AI really clear in your syllabus. And not just in the syllabus, because we all know that sometimes students don't read the syllabus. If you have assignments that seem particularly susceptible to the use of generative AI, like a writing assignment, also incorporate those expectations right in the instructions for the assignment, so they're in front of students at all times. This is one thing that I can't stress enough: the detectors that became available in the early days of ChatGPT, all the way back a year ago, do not work. And even when they purport to work, what they're providing you is a confidence score. This is a confusing thing, and it's one of the reasons we told Turnitin that we didn't want to turn on their AI detector at Penn State. Normally when you use Turnitin, what you're getting is something like "50% of this paper has been plagiarized." When you get 50% from those AI detectors, what that 50% means is that the detector is 50% confident the paper was generated by AI. That is not something I want to bring a student up on in an academic integrity violation case: "I'm 50% confident that you used AI." 
And even from an intuitive standpoint, sometimes generative AI responses are kind of generic, so you think you're going to spot them because they repeat things or don't add very many personal details into the responses. But it's really hard to prove just intuitively. I've heard cases where faculty are confronting students with suspected improper use of generative AI, passing it off as their own writing, and getting into a circular conversation where the student says, "I didn't use generative AI," and the faculty member says, "Well, this sure looks like generative AI." Where do you go from there? So one of the things, and Stevie and I were talking about this earlier today, is having a conversation with your students and verifying the student's understanding of the material. If they did indeed write a paper about this topic, they should be able to answer basic questions about it. Again, it's a question of how much you want to trust your students and what kind of relationship you have with them. I think some one-on-one conversations with them about their use, and maybe they'll fess up and say, "I did use AI for some parts of this." And then you say, "That's fine this time, I'll give it a pass, but next time please cite or explain where you used AI." Those can be constructive conversations. At this stage of society, where we're still building these norms, those conversations could make an impression on a student, and they'll do better next time. We did develop AI literacy modules. They're called AIAI, that's Academic Integrity and Artificial Intelligence, on psu.edu, linked right from the front of that site. That's a good thing to include in the beginning, like the orientation part of your course. You can put real basic knowledge checks in there so you can make it required for students. 
That way, it includes things like ethics, effective use, and evaluating the output of AI, so students can get better at this stuff. And you can pick and choose; you can modify those as you see fit, however you want to use them, but they are resources built for you that you can start using. The last part is really what I want to spend the bulk of our time on today: realizing that it's very hard to stop students from using generative AI, and starting to think about meaningfully integrating it into your courses. At this stage of the game, and probably going forward, that's the more productive strategy. We're at the invention-of-the-calculator period of time, right? Math teachers did not want students to use calculators; they thought calculators would atrophy students' brains and they'd never be able to do math. But now students are able to do more advanced math, and we use calculators in classrooms all the time. That's really the period we need to get through here. Okay, I'll go quick on this one, but these are just a few of the tools, some of which you may or may not be aware of. I think everybody knows about ChatGPT. The point I wanted to make there is just that there is a free version and a Plus version; Plus is $20 a month, and that gives you a much better model. There is a huge difference in the performance of these two things, plus a bunch of plugins that do advanced data analysis, take files and allow you to have a conversation with the files, and all kinds of cool things. So there's a big difference between the free and the paid version. Claude 2 was developed by folks who splintered off of OpenAI, who developed ChatGPT; it's a really great model as well. 
An interesting thing about that company is that they have adopted something called Constitutional AI, which is basically incorporating human values into the model as a safety mechanism: a way to make sure that the generative AI is providing responses that avoid things like harmful bias and other safety-related problems. So that's good. Perplexity is a newer one to me. It's really being built as a generative AI for students, so there are a lot of capabilities within that system for things like reading research papers and doing data analysis. A lot of the faculty I talk to who are using AI are not using ChatGPT; they're using Perplexity. So that's a good one to check out. GitHub Copilot is for coding. Microsoft Copilot, confusingly, is baked right into Office 365, and it will scan a folder of documents and create memos and presentations for you based upon your data; some of the demos of that are pretty cool. There's image generation, things like Midjourney and DALL-E. My team within World Campus Learning Design is using Adobe Firefly to develop course graphics. That's a really cool time saver and productivity enhancer for people who work in creative fields. Elicit.com, also similar to Perplexity, is a tool built specifically for researchers. If you have courses that require students to do literature reviews, or you're a graduate student or have graduate students, it's really awesome, because you can state your research questions in a really human form, and what theoretical foundations you're interested in, and it will understand what you're trying to ask and find the papers. Then it actually allows you to incorporate those papers into the conversation, so you can talk to the paper, ask the paper questions, and it will read through the paper and synthesize parts of it for you. So that's a really cool service. 
I think, in the interest of time, I can skip past this. The only point I will make about the technology and the theoretical foundations of generative AI is that when we trained these large language models, we expected them to be able to talk to us like people; that was the expected behavior. However, they do more advanced things, like reason, solve problems, and analyze data, and it's literally just a model trained on a bunch of language. We don't fully understand some of those emergent behaviors right now, and that's both exciting and scary in terms of how we shape our future working alongside AI. Okay, a couple of other things, and I mentioned this when I went through the tools real quick, but this is where I want you to start as you approach the assignments in your courses and go through revisions: don't just look at pre-baked recipes that people are developing for generative AI. Think about what the nature of your content is, and the nature of the way you like to teach, and then think about some of these capabilities and start to marry them together and say, "I think this part of my course, and the way I expect students to learn, fits with some of these particular capabilities that we know these models are really good at." I already mentioned, with Elicit.com and Perplexity, summarizing long documents, and the documents can be longer and longer. It used to be like one or two pages that they could consume; now GPT-4 can consume entire books. So you could actually upload an entire novel and then start asking questions of the model that get into the meaning of that book and the overall themes. Research papers, for example, are easily within the context window, so they're things that AI can understand. Some people don't realize that generative AI can analyze data. 
I've done this within my own research: upload a dataset and just say, "Conduct a regression analysis on this data and tell me anything interesting that you find." Just with that as a prompt, it gives me all the right metrics, does the analysis, and says there seems to be a correlation between this independent variable and your dependent variable. It's kind of stunning how quickly it does that. So if you do have that type of assignment in your courses, AI can definitely do it, and students will be catching on to that fact. It's really good at writing code, so if you have more technical classes, keep in mind that very few professional coders today are not using AI to code. I think "expand on short prompts" is fairly straightforward: you can ask it a question and it'll answer for you. Some people don't realize that you can actually upload some initial ideas and then say, "With this as a starting point, rewrite this in a particular format," like "write this as a memo" or "a memorandum of understanding" or "put this information inside of a table," and it can restructure the whole document for you. That's really handy. It can also solve logic problems. One example of this, which I think Microsoft talked about, is that you can go in there and say, "Here's a list of objects from the real world; stack these things in the optimal way so that they don't fall over." Why would a language model understand how to do that? But a lot of the time it gets most of that right. So you can imagine that it's actually doing something more than just spitting knowledge back at you; it's applying some sort of logic or reasoning to the process. So theoretically you could give it the syllabus for your course and it could explain the expectations to you? Theoretically, yes. Now, the problem is that occasionally it might get that wrong, but yeah. 
The other one is role-playing, or rewriting for a specific audience, and I'll expand on this one in a moment. There are fun ways to use this, like "take my essay and rewrite it in Shakespearean English." But you can also say, "I am a budding scientist. I don't know how to communicate the way scientists communicate. Can you take this paper that I wrote and rewrite it for me in more formal scientific language?" And it will help you do that. That's really cool, because you're expected to communicate effectively in that context as you go out into the professional world, and those norms are important for you to learn; that's one way to learn them. One more thing alongside that: I've heard about neurodivergent students who take long, text-heavy information, plug it into ChatGPT, and tell it to summarize it into more digestible bullet points, and it does that as well. You can also have it summarize the information or restructure it your way, and usually it's very accurate. Yeah, good. So for those who couldn't hear that: it lets neurodivergent students restructure information in a way that better suits how they process information. The other thing, where I thought you were going with that, is also the communication standpoint: if you have a hard time communicating clearly yourself, maybe because of neurodivergence, or if English is your second language, that's always a challenge. You can just say, "I've done my best to put these ideas together, but I know it's broken English; rewrite this more clearly for me." And that's obviously beneficial. So keep those in mind; those are some basic capabilities of AI. 
This is the way I'm starting to think about this now, as a framework for updating our instruction: start with foundational learning concepts. Regardless of AI and specific applications, this is a hundred years of learning sciences, things that we know about how people learn, how the brain works, and how we process information. I just wanted to list a few of these things and then, to some degree, put them in the context of AI, so you can see how these particular concepts related to learning might be relevant here. This is not an exhaustive list; I think the good folks at Dutton can help you expand on this and really break out your intended learning goals so that you can start to think of AI applications, but this is a starting point. The point is, it's a rough framework for you to structure your thinking. A few things I'll cover here: self-regulated learning, how students monitor their learning, reflect, and build self-efficacy about their own capabilities; social learning, how we collaborate and learn together, and how we go through the process of problem solving; elaboration, how we take initial ideas that we don't fully know how to express at the level of detail expected of us, expand on them, introduce new concepts, and connect them to prior concepts, which is an essential part of learning and of communicating effectively; and knowledge building, how we operate in a knowledge-oriented society and workforce. So, elaboration: as I just said, adding new information to more fully understand a concept. Connecting new information with prior knowledge helps you retain information, and helps you recall information when you connect it that way. 
The exercise of taking a simple idea, elaborating, and adding new concepts is a really basic and foundational part of learning, right? So how does this connect to AI? The student can provide the starting point. They might start off with a naive idea about a particular concept in your course, but they need to articulate, at the very minimum, that naive idea, and then push the AI for additional details. They can start to get into what's called prompt engineering: ways of asking the AI targeted questions to expand those ideas in particular directions aligned with where their thinking is, and doing that in an iterative fashion, expanding on ideas that are thin and really starting to come up with a more robust response, maybe for an assignment. A basic way you can talk to ChatGPT, for example, is: "Here's my initial idea," and that might just be a bunch of bullet points or an outline, and then, "What are some of the related ideas that I should consider?" It'll start getting you going, and you can revise and say, "Okay, here's my draft incorporating those ideas. I still think it's thin in this area; let's expand on this a little bit more." Through that iterative process, you can get more and more sophisticated with your writing. So, again, to Stevie's point: instead of asking students to turn in polished writing, you actually have them go through this thinking process over time, and you can reconstruct the writing assignment in your course to include those iterations. 
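The iterative back-and-forth described above can be sketched as a simple conversation loop. `ask_llm` below is a hypothetical stand-in for whatever chat API you'd actually call (OpenAI, Claude, etc.); it's stubbed out here just to show the structure, where every revision is appended to the same history so the model sees the whole iteration, not just the latest draft:

```python
# A minimal sketch of the iterative elaboration loop, with a stubbed model.
def ask_llm(history):
    """Stand-in for a real chat-completion call.

    A real implementation would send `history` to an API and return the
    model's reply; this stub just labels replies by how many student
    (user) turns it has seen so far.
    """
    n_user_turns = sum(1 for m in history if m["role"] == "user")
    return f"[model reply #{n_user_turns}]"

# Turn 1: the student articulates the naive starting idea.
history = [
    {"role": "user",
     "content": "Here's my initial idea as bullet points: ... "
                "What are some related ideas I should consider?"},
]
history.append({"role": "assistant", "content": ask_llm(history)})

# Turn 2: the revised draft goes back into the SAME conversation,
# targeting the part that's still thin.
history.append({"role": "user",
                "content": "Here's my draft incorporating those ideas. "
                           "It's still thin in this area; expand on it."})
history.append({"role": "assistant", "content": ask_llm(history)})

print(len(history))  # four turns: two student prompts, two model replies
```

If students export or turn in `history` itself, the instructor sees the whole thinking process (initial idea, targeted questions, what was kept or discarded) rather than only a polished final product.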
And then, a lot of times, and this will be common to a lot of these different strategies, students turn in each of those iterations and explain: "This was my initial idea; this is how I worked with AI to expand on it; this is what it provided; this is what I synthesized and what I threw away." Instead of turning in that final product, we're actually turning in the conversation, and that demonstrates your thinking and your mastery a lot more than just a final product. Self-regulated learning, right? This is another thing: if you have done any work in the learning sciences, you know that monitoring, reflection, and developing metacognitive skills are really essential for students. In order to learn, you need to understand where your misconceptions are and overcome them, and part of being able to do that is monitoring your thinking through reflection: externalizing your thinking so that it's almost like an object in front of you that you can look at and say, "This is where my thinking is at right now; clearly I don't understand this part of it," and then, through iteration with the AI, start to overcome those bits of misconception. And because this is just you and the AI, as opposed to a group work setting where students might be embarrassed about their lack of understanding, you can be super honest about what parts you don't understand and chip away at those things over time through a back-and-forth conversation. So really developing metacognitive skills, and unflinchingly facing your misunderstandings and overcoming them, is valuable. I do that all the time. I'm working on my PhD right now, and there are very complex concepts that I need to work through, and it's really nice to be able to talk those things through with AI; it does a really great job in those types of conversations. 
So I think you can again get a sense of an instructional strategy where you're asking students to go through that iterative process of idea development, reflection, and refining. A real quick note on self-efficacy: self-efficacy is a student's own confidence about their ability to accomplish certain learning tasks. Again, this is a precursor to learning: if you don't have a lot of self-confidence, you can't really learn effectively; you're always going to be timid about approaching a subject. AI is a safe place to confront those feelings of inadequacy that all students inevitably have and to boost your confidence. As you work through those difficult ideas, the ones you can viscerally feel you're stuck on, and overcome them, you build confidence, and then when you have to do a presentation or write a paper that you actually turn in, you've worked through those problems and built that self-efficacy. This is a really wonderful thing; there really wasn't a way for students to do that easily before. Social learning: this is something that I'm studying quite a bit, how students collaborate with each other. This goes all the way back to Lev Vygotsky, and, I don't know, Stevie or Megan, do you remember? Vygotsky is like the '30s or '20s or something like that. A long time ago; we've been studying this stuff for 100 years. Yeah, all learning is social; it doesn't happen in a vacuum. The interesting thing about natural language processing and large language models is that they talk to you like a person. So you can start to think of them as a learning partner: instead of being just a tool, like a smart search engine, you can actually engage them as a social learning partner. 
The same sorts of things that you do in a group project, you could do with AI sitting alongside you, engaging in really natural discourse with the AI. You can get AI, and this is kind of cool, to adopt a particular persona. Again, if you're in the sciences and you want to learn how to talk to scientists effectively, because that's what you're going to do for a living, you can tell the AI: "I want to have a conversation with you about this topic. As you talk to me, talk to me in the way you would expect a scientist to talk to me." Then you can start to adopt some of those patterns of speech that are norms for a discipline you need to join. There's AI literacy and other things that you can build into your courses to provide scaffolding for students to engage with the AI in this way. I think we talked a little bit about analysis already. The interesting thing here is that if you have something like data analysis or other technical work that you expect students to do, there are particular discipline-specific ways of approaching that analysis, and you can have students, through trial and error, build out those statistical and analytical skills through conversation with AI. After the analysis, when you write up your findings, you can put that into ChatGPT and ask it to check whether there are any logical errors or internal inconsistencies in your writing. If you don't know how to analyze the data in the first place, you could say, "Here's my dataset, here are its qualities, and here's what I want to get out of the data." Again, this is something that I do in my own research. It can then suggest procedures, particular statistical techniques you might use to analyze that data, and it's usually pretty awesome. 
In fact, I've gone in whole different directions with my research because of things ChatGPT suggested as ways of studying data. So that's pretty cool.

And lastly, knowledge building. I mentioned this already: you can think of generative AI as a conversational search engine. So where you have assignments like a literature review, instead of, and I hate to say this if any library colleagues hear me, instead of going to Google Scholar or the library's website to do your literature review, you could go to elicit.com or perplexity.ai. You can not only engage with that search in a really organic way, but as it returns results, you can take those papers and have conversations with them. You spend a lot of time as a graduate student or researcher reading papers that turn out not to be relevant, and an hour later you're asking yourself, why the heck did I just read that paper that was useless to me? Instead, you could take five minutes to ask targeted questions of the paper, realize it doesn't align, or that that particular position isn't really what you needed but something related is, go off in another direction with it, and do that ten times faster than you could before. So in places where knowledge building and research are part of the requirements of the course, this is a much more productive way of doing that type of work.

We have 17 minutes left. I could talk about some of the things I've discussed with students, but it's mostly iterations of what I'm covering here, so maybe we can pause. Oh, there's a question. Go ahead.

I wanted to ask about when you want students to predict something, right?
Could generative AI be something we use to help them predict something? Like, they put in a prediction, and the AI lets them know whether their prediction is accurate, tweaking it to get them better at recognizing the patterns you want them to learn?

Yeah, so extrapolating on something. Maybe. The mathematical answer is that it can do regression analysis, because I've done that, so that's one statistical way to do predictions if you have data. But I don't think that's really what you're asking. Maybe you're describing a situation, say with weather: you give your particular forecast and then ask how it's different from what the model expects? I would say, presumably, somewhere in the training data somebody shared an analysis like that, and basically what it would be doing in that case is saying: statistically, most of the time when people are looking at these conditions, this seems to be the associated outcome. So if you're seeing these conditions, then it's probably associated with this outcome. I think it remains to be seen how creative generative AI is. But then I would also ask how creative people are. The things you think of as creativity are really just building on the shoulders of giants, right?

Well, another example: I think in materials science, if you have students working with a particular material and they know its properties, could they predict what it might do under particular conditions?

Yeah, I'm assuming it would be able to manage that, but also there are some materials whose properties they should just know, so they can say, well, in this instance it's good for that.

Yeah.
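On the regression point above, here is a minimal sketch of the kind of prediction-from-data the model can walk a student through. The data set and function are hypothetical, and this uses plain least-squares arithmetic directly, as an illustration of the underlying statistical technique rather than anything ChatGPT-specific.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y divided by variance of x gives the slope.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical observations: hours studied vs. quiz score.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 71, 78]

slope, intercept = linear_fit(hours, scores)

# Use the fitted line to predict the score after 6 hours of study.
prediction = slope * 6 + intercept
```

A student could make a guess first, then compare it against the fitted prediction, which is exactly the predict-then-check loop the question describes.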
And I would say, at the very least, well, if you're building a bridge, you probably shouldn't ask ChatGPT how to build the bridge, because it might collapse very quickly. But at the very least it gets you pointing in the right direction. It might say, I think the best way to approach that problem is this, or, I think the answer is this, but here are a couple of things you could research, and that's more productive than just throwing darts at a dartboard or taking shots in the dark. It directs your thinking in a more productive way. So I think it's worth doing, but this is what I always tell students: use your critical thinking skills and don't trust everything these things are telling you. The problem, and you've all heard of this idea of hallucinations, is that ChatGPT and these other models will very confidently give you bull crap. It will sound extremely convincing and provide all sorts of supporting detail, but it's just throwing the language together.

I was playing around with ChatGPT, and one answer it gave me had to do with learning styles. I said back, you know, learning styles are a myth; now what is your response? And it came back right away with, I guess there actually is no evidence for this. But then I kept asking it questions, and not much later it brought learning styles back into its answers.

Oh, interesting. Yeah, so even me correcting it wasn't enough; it still treated learning styles as settled science. And when it's a topic you don't know well, it's really hard to tell which of its information is reliable.
You can get yourself into some real challenges. I could have taken a stand and been anti-AI in education, but you really need to make students aware of this and teach them how to use it appropriately in the classroom, because otherwise they could end up citing incorrect information and then getting a bad grade, and it can become a pretty big trap for them.

Yeah. But that's not inconsistent with the way we've always worked. If you've ever sent students to a library session about how to critically evaluate sources: don't believe everything you find. It's the same thing here, but it is tempting to believe it because it sounds so confident in what it's saying.

Can I ask what tool you used? And was that the free version or the paid version?

Well, it might be something that's overcome in the paid version, because these models do have memories, right? If you instruct it in some way at the beginning of the conversation, it should have a memory that persists through the whole chat session. So if you say, that's not true, stop saying that, it should stop saying it, but that might only be in GPT-4 and not 3.5. Not all models are created equal.

I was just trying to put together demographic questions, and I haven't done that in a long time, so I wasn't sure how that stuff should be worded now. I ran it through 3.5, and it did an excellent job on the questions for me. I pushed back on a few things, and it seemed to work fine, and it remembered that I was working on a survey all the way through, from the first questions to the last.

Yeah, that's useful. Now, there might be new trends in that sort of language that have come out in the last two years, and it's not going to know those. So that's tricky.
By the way, one other thing in terms of shaping its responses. I know the paid version has this, I'm not sure about the free version, but if you log in you can give it what are called custom instructions, and they persist and are incorporated into every single response. In my custom instructions I said: I am a learning design professional, I'm also a student studying learning sciences, I do this type of work, so please provide responses consistent with those facts about me. And then I also said: in every one of your responses, provide citations. I put that in the custom instructions, and I don't have to say it every time. So every single thing ChatGPT tells me, it provides a citation for, and the citations are usually pretty good, so it's actually verifying what it's telling me. That's pretty useful. I'm not sure if custom instructions are in the free version; you can try it, but they're certainly in the paid version. So this gets at prompt engineering, right? What are the strategies you can build into the way you articulate your prompts so that you get consistent results back?

I think we've talked through most of this already. These are some things I've talked to students about, and maybe they'll start to spark some of your own thinking about strategies for the classroom. We talked a lot about writing; this is a writing assistant. Again, imagine me talking about this stuff to students: how can you use generative AI effectively as a student? Start with a good outline, feed that into ChatGPT, and get it to elaborate or rephrase. I already mentioned some of these examples: rewrite it, make this more concise, add some variety to the vocabulary. Those are the kinds of instructions it will take: create a memo, put this in tabular form.
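A rough sketch of how persistent custom instructions work mechanically: the class and method names here are my own illustration, but the idea is that the instructions are simply attached to every request, along with the conversation history, which is why they shape every response without being retyped.

```python
class ChatSession:
    """Toy model of a chat where custom instructions persist across turns."""

    def __init__(self, custom_instructions: str):
        self.custom_instructions = custom_instructions
        self.history = []  # accumulated user/assistant turns

    def build_request(self, user_message: str) -> list:
        """Every request carries the custom instructions plus prior turns."""
        self.history.append({"role": "user", "content": user_message})
        return [
            {"role": "system", "content": self.custom_instructions}
        ] + self.history

# Custom instructions stated once, then reused on every turn.
session = ChatSession(
    "I am a learning design professional studying learning sciences. "
    "Provide citations in every response."
)
request = session.build_request("Summarize self-efficacy research.")
```

This is also why correcting the model mid-conversation only lasts as long as the session: the correction lives in the per-chat history, while custom instructions sit outside it and persist.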
So, restructuring, basically. If you have writer's block, this is a good way to dislodge yourself and get into more of a flow state with your writing.

This is really cool with Claude, GPT Plus, and Perplexity: you can upload files. So you could upload your course materials. All of your course materials are open, so you could download those things, upload the files into ChatGPT, and just say, create quiz questions for me, I need to practice this material. It will actually generate flashcards or quiz questions that you can respond to.

What about copyright?

I'm not sure the university feels it's necessary to write new policy, because a lot of this is already covered by policy: don't break copyright by giving away information you don't have the rights to use that way. And with sensitive data, I don't think that's a good idea at all. Are you familiar with, I think it's AD95, the sensitive data policy that defines the data classification levels? Level one is the least sensitive data, stuff that's already publicly available; level two is where there's personally identifiable information; and level three is highly protected data. I'm not sure those data classification categories include copyrighted material, but I bet they do, so be aware of that. But it remains to be seen how these companies use this data. If you uploaded a copyrighted book, it would incorporate that information into its model, but it's not really going to store the whole book as is.
So I think there are about 25 different cases being argued in federal courts around this country regarding copyright and fair use, and there's no legal precedent right now for a lot of those thorny questions. But in any case, if you do have the rights to the material, it's a really great study aid.

Chris, can you share these slides?

Yep, I always forget to do that; just follow up with me and I'll send them. Okay, I think we talked about this already. By the way, these images, I think all the images I use in this deck, were generated by DALL-E, the AI image generator, including this one. I literally said, create a cartoon image about academic integrity, and it came up with this, which is kind of an anime thing that did not exist before I put that prompt into DALL-E. So it's neat.

Again, with plagiarism, we don't know: is copying something out of ChatGPT plagiarism? It's hard to say, or we need to update our definitions, so it's complicated. Cite your sources if you use ChatGPT; APA has ways of citing ChatGPT, so always cite your sources, including ChatGPT. I do tell students the detectors don't work, but as a budding professional you should develop some ethics of your own, including being honest in these cases. I don't know if that's a persuasive strategy for students, but: be honest. As I said, copyright precedent is still being worked out right now. Be careful of harmful bias or fake information being incorporated through hallucinations in these responses. If you just copy and paste this stuff out of ChatGPT into an assignment, and I'm talking to you as a student here, you're ultimately responsible for it. If you say something awful because you copied it out of ChatGPT and didn't actually read it thoroughly yourself, you're responsible for that, not ChatGPT. And then, I already mentioned sensitive and protected data.
So, with the last couple of minutes, think about specific courses that you're either designing or teaching. Given the capabilities we talked about, and given some of those foundational learning concepts, what modifications to assignments are you thinking about? Let's assume your assignment is a writing assignment or some kind of data analysis assignment, and you know pretty well that students are going to use ChatGPT to help them. You don't want to just sweep this under the carpet and pretend it doesn't exist. How can you modify it?

I'm slowly thinking about the iterative process; that makes a lot of sense.

And by the way, for those iterative assignments, one of the problems with that particular model of instruction is that it creates a lot more stuff to be graded, too. You're potentially creating a ton more work for the instructor to deal with, so how things get assessed is a whole other question.

From an instructor standpoint, though, you already know how to use it. You could also use it to generate a rubric to hook into the grading, and to get feedback on the things you think are important in that assignment.

I think that's a wonderful idea. I find writing rubrics extremely tedious, so having AI do that for me, and really customizing it to the assignment, is great.

Or if you're tweaking a course proposal for the faculty senate, if you know what you kind of want to teach but you're stuck on some of the topics, you could put in a title, maybe a draft of a syllabus, which is actually not terrible, put in a list of topics, and ask it to fill out the rest. So it could go either way, as support for faculty members as well as students.

Yeah.
And in the amount of time it would have taken you to write that yourself, you could get a result you're not totally satisfied with and just make it iterate a hundred times until you get the one you like. You can keep tweaking it within that same amount of time.

By the way, I had a whiteboard in my office with a bunch of diagrams and ideas for data analysis and little key terms for my research and such. One thing ChatGPT added just a couple of weeks ago is called Vision, where you can upload an image and it can make determinations based on that image. This was all in my scrawl, I have terrible handwriting because I'm left-handed, just my scribbles on a board. I took a picture of my whiteboard, submitted it, and said, tell me what this is about, and is there anything interesting I can do from here. And it said, it looks like you're probably a student or researcher studying knowledge building or something related to learning sciences; these are the methods you're using; I would suggest you use these methods instead. It was creepy how well it did.

So I wonder about the demographics of who is using this. It really looks like the rich getting richer, because if I'm at a certain level where I'm going to start using it, I'm going to benefit academically, professionally, mentally, while another person won't be using it at all. And that goes to equity above all else. We can't control it, but, you started by saying it's a tectonic shift; it's going to be a tectonic shift for certain people and not others.

Yeah.

People argue that everyone benefits, and okay, I don't doubt that everyone could benefit from this, but some of our students are going to be less able to take advantage of what's here unless we actually bring it into our classes and teach them to use it.
Yeah, for sure. There have always been those inequities in society, and technology exacerbated them: people who get access to technology earlier in their lives and develop the skill sets are more capable of operating in a knowledge society than students who didn't. This is an order of magnitude worse than that, and I think it's going to happen very soon. Within the next decade, there are going to be people who know how to use AI effectively, and if you're part of the population that didn't get access and doesn't have those skill sets, it will be nearly impossible to get any of those knowledge-related jobs. That's how big the inequity will be.

So, talk to the Dutton folks for further consultations. I really think the most useful way to approach this is in practice: just try it. Work with an instructional designer, and you're welcome to reach out to me as well if you want to bounce some ideas around. I'm also collecting ideas and modified assignments as people start to embrace this stuff, so if you have examples you want to share, I'd love to incorporate them into my own knowledge base. So thank you so much for your time.

Yes, thank you, Chris, for leading this really interesting and engaging session today. We really appreciate it. For those of you following along online, there are some links in the chat window if you haven't noticed them already. Help us make sure these sessions are customized to your needs: there's a link right there for you to fill out a survey. It's a very short survey, I promise, but it helps us get better insight into how we can serve you. Also, consider joining our Teams channel to stay up to date with the conversations happening here in the Dutton Learn series. That link is also in the chat window.
And our next Dutton Learn session will be focused on supporting neurodivergent learners; we'll be releasing the date, time, and location information for that session soon. Thank you so much for joining us, everyone, and have a great rest of your day. Bye.