Well, hello everyone. I see our audience is still joining, so I'll give it just another minute or so and then we'll begin. Okay, so hello everyone. It's my pleasure to welcome you to the Teaching in the Age of AI panel discussion today. This is the first of many conversations about AI that we hope to have at the University of New Mexico as we consider the impact of this emerging technology on higher education. My name is Leo Lo. I am the Dean of the College of University Libraries and Learning Sciences here at UNM, and I'll be moderating the panel today. I would also like to thank Pamela Cheek, our Associate Provost for Student Success, who has been instrumental in getting this panel set up. So, the launch of ChatGPT by OpenAI in November last year now seems so long ago, but it's only been four months or so. There has been such a buzz, even shock, among everybody, and especially educators, and we believe it is an opportune time to explore the use and development of AI in an academic setting. It's important to consider both the potential benefits and the challenges associated with this technology. Here's an outline of what we're trying to cover today. First of all: what is ChatGPT? Then the potential of AI in education, some strategies for teaching with AI, and definitely the ethical considerations of AI in education. We will leave some time at the end for Q&A, and then we'll wrap up with some next steps. We are very excited to bring together a diverse group of experts to look at AI in education from different perspectives. So let me introduce our panelists today. First we have Lydia, the chair of the Computer Science Department at the University of New Mexico. Lydia's primary expertise is in the integration of machine learning for the planning and control of automated motions and tasks in robotics and computational biology domains.
She is currently serving on a National Academies panel on using machine learning in safety-critical applications. Next, we have Victor Law, program director of the Organization, Information and Learning Sciences program. Victor has been conducting studies examining the effects of different scaffolding approaches on students' complex problem-solving learning outcomes. We have Adrian Fast, a freshman majoring in computer science who is currently involved in research on AI and neural networks. We have Lori Townsend, learning services coordinator and social sciences librarian at the University Libraries. Lori's research interests include cultural humility, genre theory, information literacy, and undergraduate understanding of digital sources. We have Ian Heninger, university assessment specialist with the UNM Office of Assessment. Ian has previously worked as a librarian, adjunct lecturer, and instructional designer. His extensive experience in teaching and assessment provides valuable insight into how we can best integrate AI into the learning experience. And finally, we have Jet, an instructional media specialist at the Center for Teaching and Learning. Jet is also a doctoral candidate whose research areas include teacher feedback and culturally responsive teaching. So, welcome, panelists. Now, before we begin the discussion, I would like to invite our audience to participate by submitting questions using the Q&A function. Our panelists will be able to see them and possibly answer them right there; if not, a panelist will help us group the questions, and during the Q&A portion at the end of the panel we will have about ten minutes or so to answer questions as well. If we run out of time, we'll save all the questions and try to answer them afterward. So, let's begin our discussion. This is a question I want to ask Lydia. Lydia, in layperson's terms, can you briefly describe what ChatGPT is and how it works?
Definitely, I'm super happy to. I'll share a few slides just to make it a little bit easier to walk through the technology. They're very straightforward, though, so don't worry. Okay, there we go. So, to answer that question, I think it's worthwhile going back a little bit in history and thinking about what got us to this place of ChatGPT, and how this new model has taken over the conversation of the world. We're not going to go back that far. But the general idea that you should come away with is that when we're building a general conversational agent, which is the goal of ChatGPT, bigger is usually better. We want some kind of large language model, and that's what has been driving the research in this area. This is why ChatGPT seems so amazing and great: we've been able to grow these models so large, with resources that we haven't had before. It really started about four years ago with this simple little box here on the right, the transformer model. Before that, we had different architectures for AI where you had to think about what window of input to give the system. Now I can just pass it everything, and it can encode that information, do some processing on it, transform it, and then decode the output for us. So this transformer model really changed things in the last few years, and that change let us build the complex models that provide better intelligence. I'm going to walk you through some of that history of just how things have changed in the last four years. In 2018, we built what we thought was a complex model at 100 million parameters; that's really large for these AI systems. And then we continued to grow it, even that same year, more than doubling the parameters that we were able to model in an AI system.
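Very roughly, the core transformer idea Lydia describes, letting every position in the input attend to every other position instead of using a fixed input window, can be sketched in a few lines of Python. This is a toy illustration with random vectors, not anything from the actual GPT models; the function names and dimensions here are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention with Q = K = V = x:
    # every token mixes information from every other token,
    # so there is no fixed-size input window.
    d_k = x.shape[-1]
    scores = x @ x.T / np.sqrt(d_k)     # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ x, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional embeddings
out, weights = self_attention(x)
print(out.shape)              # (4, 8): one mixed vector per token
```

Real transformers add learned projections for queries, keys, and values, multiple attention heads, and stacked layers on top of this basic mixing step; the sketch only shows why the whole input can be "seen" at once.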
So we went to 1.5 billion parameters, then we were able to grow that even higher, to 8 billion parameters, across the different papers that were published leading up to ChatGPT, and then 18 billion parameters in 2020. And then GPT-3, which is the basis of ChatGPT, came out in 2020 with 175 billion parameters. It's a huge model. So even two years ago we were saying: we need these huge models to build conversational agents. And it required extensive computational power: 10,000 NVIDIA GPUs, the highest-end GPUs, which cost about $10,000 each. Imagine a $100 million computation system to build this, and an estimated 12 gigawatt-hours of power to train; think of that as about 12 hours of output from a nuclear reactor. So two years ago we didn't think these things would be as widely used as they are today. You're probably thinking: how did we get to where we are now with ChatGPT just two years later, if it required that much computational power? Well, the ChatGPT team took the idea in a slightly different direction. They said: we want to be better, not just continue to grow bigger. So they scaled back the number of parameters in the system; think of it as taking that huge AI system and making it smaller, but they did it in a unique way. So you might think: what gives? How can ChatGPT be disrupting our educational system when it's a smaller AI system than we had two years previously? The way they were able to do this is by training with human feedback. I take my system from before, and rather than just trying to make it bigger and scan more of the web, I take a human and use human input as to what the system should learn: for a particular prompt, what output should that system provide?
Now, this is a super old-school AI idea, supervised learning, where we have a human supervise the learning system to make it learn better. But if you do it at scale, with 13,000 input-output examples, which is what ChatGPT did, then you actually get something that starts to think and answer more like a human than the original GPT-3 did just two years earlier. There are other things we can do. That step gives you a simple model, a kind of framework for different kinds of question prompts that we didn't have previously. But it doesn't give you a lot of generality, right? If we're only training on a small dataset on a smaller system, we don't get the generality that we see, and are so scared of, in ChatGPT. So it really takes additional human feedback for refinement, and a system that can start to teach itself in order to continue to get better. This is what ChatGPT does that had not existed previously within the GPT systems. We take a system that can take in an input and produce some outputs. Then we ask a human: in what order would you rank those outputs, in terms of which are good and which are bad? We can use that to train another AI system that starts to learn what a human values: did the human prefer a response like this, or a response like that? So we take those rankings of the first system's outputs and use them as inputs to the reward model, and it will output whether or not a human would think a response is good or bad. Pretty simple idea. I put this in gold for a reason: this is a really hard problem, because you need a lot of human input to be able to determine this. And that is what ChatGPT did; it took a lot of human input to build this model of the human reward signal so it could be used to train the system itself.
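The trick of turning those human rankings into something trainable is typically a pairwise ranking loss: the reward model is penalized whenever it scores the human-preferred response below the rejected one. Here is a minimal numerical sketch of that idea; the scores are made up for illustration and this is not OpenAI's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ranking_loss(score_preferred, score_rejected):
    # Pairwise (Bradley-Terry style) loss: drive the reward model's
    # score for the human-preferred response above the rejected one.
    return -np.log(sigmoid(score_preferred - score_rejected))

# Made-up scalar scores a reward model might assign to two answers.
agree = ranking_loss(2.0, -1.0)     # model agrees with the human ranking
disagree = ranking_loss(-1.0, 2.0)  # model has the ranking backwards
print(agree < disagree)             # agreement yields the smaller loss
```

Training on many such comparisons nudges the reward model's scores toward the orderings humans gave, which is why collecting a large amount of human ranking data was the hard, expensive part.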
So once I have this reward model, I can pretty much have ChatGPT, and this is how it got so good, figure out how it should respond given what a human thinks is good or bad. There have been some developments here where I can refine that reward model on the fly as I go, so I can keep learning what humans like and don't like, and I can continue to refine my original ChatGPT system so it finds better ways to respond to certain prompts. I hope that answers your question of how we got to where we are in just two years; it's a remarkable application of technology and of human and computational resources. That's great. Thank you. Oh, go ahead. You're welcome. Oh, no, that was the answer to your question. Yes. So, in a way, ChatGPT now, as the CEO has said, is kind of like a research demo. So we are, in a way, helping them learn a little bit as well, with our thumbs up and thumbs down on the responses it gives us. We keep fixing this reward model, updating what humans tend to like and not like, and then we can use that to refine the policy, as it's called, that ChatGPT is using to make responses. Okay. Now, many of us, I think most of us, have played with it quite a bit, and I enjoy experimenting with it. But can you tell us, from your perspective, what is something ChatGPT is, in quotes, good at, and what is it bad at? Exactly. So I love this question. It's the first one we handle when we teach the AI class here at UNM, because it's the first thing you need to think about when you're building an AI system: how are you going to evaluate it? So I like to start by having people think about how to evaluate these systems. When we teach the AI class, it's kind of fun to think about this, because we can see if our AI system is smarter than a human.
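Once a reward model exists, there are several ways to use it. The real system refines its response policy with reinforcement learning, but a simpler scheme that illustrates the same mechanics is best-of-n sampling: generate several candidate responses and keep the one the reward model scores highest. The reward function below is an entirely made-up toy standing in for a trained reward model.

```python
def best_of_n(candidates, reward):
    # Score every candidate response and keep the highest-reward one.
    return max(candidates, key=reward)

# Toy stand-in for a trained reward model (entirely made up):
# longer, more polite responses score higher.
def toy_reward(text):
    return len(text) + (5 if "please" in text.lower() else 0)

candidates = [
    "No.",
    "I can't help with that.",
    "Sure. Please see the three steps below.",
]
best = best_of_n(candidates, toy_reward)
print(best)  # the longest, most polite candidate wins
```

Reinforcement learning goes one step further than this: instead of filtering outputs at response time, it uses the reward scores to update the model's weights so that higher-reward responses become more likely in the first place.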
That's one of the easy tests people use: does it outperform a human? We can also take prior intelligent methods, GPT-3 or any of these other earlier systems, compare ChatGPT to them, and see if it's doing any better. And if I have a mathematical or some kind of hand-tuned solution, I can compare against that too, and if the system does better, sometimes we call that intelligent as well. These are the metrics we tend to use, and you'll see them thrown around when people talk about AI being good or bad at things, which is why I wanted to cover them real quick. So, thinking about these metrics, what is ChatGPT really good at? People rated it very helpful. You're able to give it an instruction, even sequential instructions, which is amazing for AI systems, and it's better at this than any prior AI system, even going back over this short history of four years and before, in terms of how helpful the system is rated compared to other systems. There are also some existing test datasets for measuring the truthfulness of a system. This was something the creators of ChatGPT thought was very critical to evaluate. What they found was that the ChatGPT system actually ranked as more truthful than existing systems; for now, we'll take this with a grain of salt as we keep talking about ChatGPT, but this is one of the things they have seen. And to compare it to humans: with some of the earlier systems, even the GPT-3 that ChatGPT is based on, we can take writing samples from GPT-3 and ChatGPT, compare them to writing samples from humans, and do what they call a little Turing test: was this written by the human or by the machine? Most of the tests that had been done with GPT-3 are now being repeated with ChatGPT.
For short writing samples, which is typically what they use, its writing is being rated as human-like, or even more human-like than human writing, so it gets really hard to tell whether it's human or not. That's some of what we know ChatGPT is really good at. So the next question is: what is it really bad at? As we saw, it's driven by the human reward system, well, by whoever those people were that OpenAI paid; there have been some investigations into exactly who they were, what company was used, where these people were located, and how they defined what was good and what was bad. So just the labeling of what's toxic and what's not can leak into the reward system of the AI, and that can drive what the responses will be. What's been seen is that, because of that human-driven reward system, there can be issues with the toxicity of the responses from ChatGPT. This has been shown with this system and with others as well. Another thing it's really bad at, and most of these complex AI systems suffer from this problem, is that it's highly uninterpretable: I can't tell exactly how it came up with a response. With the reward model I can distinguish a little bit of why it might be off, but with a complex, large-parameter system it's really hard to tell exactly why it's producing those responses. So interpretability is really hard for these systems. I thought this was funny and wanted to add it in: GPT-3 was actually shown to be really bad at common-sense physics. The reason is that these are things it maybe hasn't seen before, because nobody really asks questions like, "will cheese melt in the fridge?" It's not necessarily going to know, and it's going to be hard for it to answer. These kinds of questions are really hard for these AI systems.
That's because nobody tends to spend time thinking about them; they're just common sense. It's also really hard to distinguish, for a system, what is new versus what it was trained on. We get really worried about AI systems maybe taking over our jobs or doing something for us, and it's really hard to tell whether a system has just seen that experience before, or whether it has come up with something new, some new interpretation, a kind of emergent behavior. So this is really hard to tell from these systems as well. And then this last one is the one that scares me a lot, thinking about safety in AI systems, and for us as educators it's an interesting point as well: it's called hallucination. This is when an AI system makes up an answer that is just not true, by taking things that sound like they make sense and putting them together. These AI systems have a hard time, because they're just working off these rewards, figuring out whether what they're saying is actually true, or whether they're just combining things that seem to go together. This affects things in real life: if we have a system that's making up answers, it can affect our safety or the accuracy of what we're answering. And we get no guarantees. If I'm a student using it to answer my question, I may get a right answer, or I may not, and it can be really hard to tell the difference. Thank you so much. That was such a great overview. Honestly, I understand it so much better than just a few minutes ago. Thank you, Lydia. Thanks. Now that we know a little bit more about ChatGPT, we can think about the potential of AI in education. So let me ask Victor: as a learning sciences faculty member, what are the potential benefits of using AI in education? Thank you, Leo. May I share my screen? Yes. First of all, I apologize.
I thought that I had ten minutes, but actually I need to share the time with Adrian, so I will rush through my slides. My background is in instructional design and technology, so I borrowed a slide from my instructional technology and instructional design class. It's a general ADDIE-style model for thinking about how we design instruction, on which I've highlighted the term "learning objective," because it's all about this: what do we want our students to have learned by the end? And then we have to align that with the learner, with the context, with the task, and with everything else. When I think about AI, or ChatGPT, it fits into many of these boxes. It's part of the context, because our students are now able to use ChatGPT as part of the toolbox they use to understand whatever questions they are given. It's part of the instructional strategies, because we can incorporate ChatGPT into any of them. It's part of assessment, too, because students can use ChatGPT to produce some of their homework or even exams, so what we should do about assessment is probably one of the biggest questions in the education world. But I will not spend too much time on that today. So, I started to experiment with ChatGPT, and my first reaction was: wow, it is amazing. I always teach my students that the computer doesn't understand English. Just yesterday I was teaching students to use the library databases, and those databases don't understand English; they just take whatever we put in. But now ChatGPT kind of understands English, and it is very good at giving us useful information on a research topic. I think it is really good for kick-starting a research project, and for checking whether we may have missed any important or useful perspectives. So this is a screenshot that I copied from a ChatGPT session I ran.
I asked ChatGPT: please describe theories regarding help seeking, which is a topic I'm interested in right now. It was pretty good; it gave me quite a lot of theories that are relevant to my research topic, and many of them are really good. So it genuinely helped me move forward on what I should consider for my research. Thumbs up, great. I kept playing with it, and my second reaction was: what? It can do this? Of course, the information ChatGPT gives can be fake. This is one of the questions some people from the audience have asked: how can we know whether a piece of information is fake or not? I do not have an answer, so maybe Lydia or somebody who knows can help us. The basic idea is: don't trust the answers from ChatGPT. It's similar to how we teach our students about Wikipedia: go to Wikipedia to look, but don't just trust it, because the information may not be right. With ChatGPT, I think it can be terribly wrong. Here is another thing I tried. I'm doing a meta-analysis, so I asked: can you show me some meta-analysis articles regarding help seeking? When I first looked at the result, I said: wow, this is something really good. It looked like something I could even draw on in my writing; of course I should not plagiarize, but it looked like something worth building from. But as I kept going, I realized that those citations are fake. They look really, really good: the titles are convincing, the journals are the top journals in my field, and the authors are people I know, whose articles I have read. But I put in the DOI, and the DOI doesn't exist. Then I went to those journals for that year, and the articles are not there.
I even went to those authors' websites to see whether they had actually published anything like that. No, they hadn't. So the point, coming back to this, is: it looks like they published these things, but in fact they didn't. If students actually use this in their assignments or in publications, it's a terrible result. So what I want people, especially students, to know is: be careful, be careful of what you are getting, because whatever you get can be terrific, but at the same time it can be terrible. One of the questions people ask is: should we let our students use ChatGPT or AI in classes? I learned this from my finance class when I took my MBA: the answer is always "it depends." It depends on, coming back to instructional design 101, the learning objectives, the students, the context, the task, our choice of instructional strategies, and our choice of assessment. When we put all of them together in a holistic sense, then we can decide whether ChatGPT is a good fit. I will not say too much about this because I know I don't have time. I want to bring up a concept from Vygotsky regarding the "more knowledgeable other," or more capable other. In the education realm, a lot of the time we think a teacher, an expert, an older sibling, or even a peer can be that more capable or more knowledgeable other. In instructional technology, we have long imagined that technology could play that role, but a lot of research in the 2000s suggested there were tons of limitations at that time. Of course, it is 20 years later, and ChatGPT can do a lot more. But we also need to understand the limitations of AI, as Lydia shared with us, and we will know a lot more in the future. So what can we do with AI? That's the question Leo brought up.
Here is one suggestion; of course there are many, but the one I keep thinking about is peer learning. As I discussed with the concept of the more knowledgeable or capable other, AI, or ChatGPT, can be that other. If I were a student, I might want to use debate as one of my learning tools. In the past, I needed to find somebody to debate with me; now I can debate with AI, debate with ChatGPT, and the results can be quite insightful. Another thing I have been using in my classes is peer review, and now, with ChatGPT, we have a peer. We can do peer review with ChatGPT: students can critique ChatGPT's writing, or they can have ChatGPT critique their work. So I think there are multiple ways we can use ChatGPT for peer learning. Finally: technology is a tool. It is great, but it can be a disaster, so watch out. In the words of one of my favorite quotes from Indiana Jones: choose wisely. So, Adrian, as a student, how do you and other students use ChatGPT and other AIs right now? And I just want to let you know that we're not holding anybody accountable for anything. Yes. I think you can separate the ways my peers and I use ChatGPT into two categories: ways that are productive to our education, and not-so-productive ways. By the not-so-productive ways, I mean it's being used to fully write essays, generate responses for online discussions, answer quizzes, and do assignments. Obviously that's not super effective for learning, and it's hard to catch, which is an issue too. As for the productive ways: I've found it quite helpful for proofreading stuff I write, emails for example. It's also really helpful for reviewing research and concepts learned in classes; it's been very helpful for exam review for me personally.
And also as a research assistant: if I'm trying to research a topic, I've found ChatGPT very helpful at giving me an introduction and a basic overview, and from there I can do further research; it basically gives me a start. Also, I'm a computer science student, so I do a fair amount of coding, and I've found it helpful as a second pair of eyes for my code; it's been good at finding bugs. Well, thank you, Adrian, but one quick follow-up question. You use it, other people use it, and different people use it to different extents, so they have different levels of experience with it, and some have learned to use it better. Do you feel that we should provide that kind of education? Yeah, I think it would be helpful. It's a new technology, so it's going to continue to improve and become more reliable, so I think learning how to use it, just as we're taught how to do research in the library, for example, would be helpful. It's lovely to hear from the student's perspective. So, next up: can we offer some practical strategies for teaching with AI? Let me ask Lori first: knowing some of the strengths and weaknesses of ChatGPT, what are some strategies that faculty can use? Okay, so I'm going to go ahead and share. I have a brief background on learning services, and then I'll talk about some strategies; it will be very brief, since I know we're running low on time. I'm part of a team in the library, learning services, which is part of learning and outreach services, and we focus on digital and information literacy. We teach a general education class, 1110, which is Introduction to Information Studies; to describe it in a nutshell, past students have called it "the internet class." We actually include algorithms and artificial intelligence as part of our course content.
This is an introductory, thousand-level course in an interdisciplinary field concerned with the impact of technology on society and with helping students navigate systems of information. So we're really taking a bird's-eye view, and we're focused on the ongoing impact of these developing technologies. We want our students to think about how they can use them effectively, as we were just hearing about, and also ethically, while developing a critical eye for these things. What I'm showing you is actually an infographic; there's more information below it, which I'm not showing because I know you'd all start reading it right away. So here's our approach and a little bit about our class; I can go right past that, and I will share a link to it later. Okay. The thing we're typically focused on in learning services is the design of effective research assignments: how to guide students through a research process that looks a lot like learning, and how to help them build transferable research skills they can use in their careers as well as while they're in college. Based on our experiences working with students and faculty on research assignments, we have some thoughts about how to deploy something like ChatGPT, or how to think about designing assignments with it in the room. First of all, we think it's important to focus on the process of learning within assignments. That means building in more places for students to show their process as they complete an assignment, perhaps even including reflective pieces where students document the mistakes they made and what they learned while completing the assignment, rather than having something else do it for them. We work closely with the English department and their English 1120 composition courses.
There we see a lot of well-structured assignments with intermediate steps that lead to a final product and give students a chance to communicate what they're working on as they go along. What I would recommend, when we think about the day-to-day assignments we give students throughout their learning process, is designing assignments that require personal reflection or active learning, where students apply the concepts they are learning. That way students won't be able to go to ChatGPT, simply ask for a summary of something, and paste it in as a response to the assignment prompt; or they might try that, but it won't really work, because it won't actually answer the assignment. So it's a little more mindful. One of the things I actually did in preparing for this was to try some of our own assignments using ChatGPT, and that was actually fun; well, I thought it was fun. Thinking about how you might try to do that is an interesting exercise: approach it as a student would, and test your assignments a little to see how well they're working. The final thing I'm going to recommend, and I'm going to scroll down for this, is actively engaging with ChatGPT, something Victor already referenced: make it a partner in teaching by having students react to, critique, and edit the information it produces. You can also ask it for discussion prompts to debate with students. There are a ton of ideas out there about using it in your teaching practice and encouraging students to use it ethically. I have a few here gathered from a librarian, Ray Pun, along with a link to a bunch more of these kinds of assignments and ideas. So there's more information and links on this infographic, and I will say we've got some examples of assignments in use.
And then, finally, some additional recommendations, with a link to a research guide we maintain that has a lot more readings on artificial intelligence and education. I will add that thus far I have recommended ChatGPT to students to help them paraphrase from sources, because that's still pretty hard, and it is helpful with it. And a student mentioned struggling to understand a research article they were trying to read in a technical field, and I suggested they use it to get definitions while reading, which I think is along the lines of what Adrian was talking about. Let me get the link to this and I will put it in the chat. Great. Thank you, Lori. And now let me turn to Ian and Jet. Tell us about the resources and approaches that can support faculty in teaching with AI. Yeah, definitely. Well, I can go first. I would definitely echo what Lori said: a really important part of this is moving away from assessing outputs and toward looking at processes, because even before AI there was always the risk that those outputs could be created by someone else. So use assessments that really let you see the learning process occurring: live, oral, in-person, time-limited options, active learning, flipped classrooms, things like that. It's also important to consider assessments based on things that AI still can't do. For instance, ChatGPT is not really up on recent advances or current events; its dataset only goes up to 2021 or so, so it still thinks Elizabeth is the Queen of England, unless a human intervenes and tells it that's wrong. Include personal or course-specific elements that the AI can't possibly know, and require sources and citations, because as we've seen, it will hallucinate ones that don't exist.
I concur with the panelists. You know, there are really good opportunities for assessments that use AI: assignments that compare and contrast what a human writes with what an AI writes, or that try to break it and get it to demonstrate its limitations, are all really good opportunities for practicing critical information literacy. And taking a more big-picture approach, I think ChatGPT also presents an opportunity for us to think about our own philosophies and approaches regarding academic dishonesty, you know, whether they're more punitive or more nurturing. Are we encouraging the learning behaviors and ethics that we want, or are we just encouraging, you know, not getting caught? So really thinking about our desired outcomes and behaviors and working back from those, with all the different approaches, trying to use AI, trying not to use AI, can be really helpful. And in terms of resources, I know at the end we'll share a really good LibGuide that the librarians have put together, which has lots of concrete links and resources for people to consult. Thanks, and I also would like to share the resources that I put together in the chat right here. I want to take a step back and talk, with all of us educators, about why students cheat. In some cases, academic dishonesty is committed intentionally. However, in some cases academic dishonesty reflects other problems students are facing, and I want us to think about that as well. For example, some students commit academic dishonesty simply because of poor time management, procrastination, or disorganization. So I would recommend sending these students to UNM's Graduate Resource Center or CAPS to help them manage their time and avoid committing dishonesty.
For some students, a lack of confidence in their own ability to learn the subject and pass the tests leads them to violate academic integrity rules. So we have to think about what we can do to help them feel confident and succeed in their learning in the course. And for some students, the reward gained from cheating seems worth the risk: for example, getting a good grade to graduate, obtaining funding, or getting praise from their friends and parents, right? So, thinking about what Lori and Victor talked about, and about what ChatGPT is good at: it's good at producing answers, which mostly exercises lower-order thinking skills. What we as educators can do is design learning activities and assignments that promote higher-order thinking skills, or HOT for short, which include critical thinking and creative thinking skills. The link that I put on the slide, by Jessica Mansbach, on using technology to develop students' critical thinking skills, could be a really useful tool for designing such tasks. Also, for educators: try to get away from teaching skill mastery and focus on the project experience and process. I like project-based assignments; use authentic assessments that incorporate students' funds of knowledge, since ChatGPT won't know about students' individual experiences, right? And since students may be more tempted to cheat on high-stakes assessments, break them up into smaller quizzes or exams. One of the links I shared, on promoting academic integrity, provides suggestions to instructors, for example to explain the relevance of the course to students' goals and to show confidence in your students' ability to succeed in your course. Finally, you have to think about this: if students integrate information generated by AI into their assignments, how would you and your students cite that information?
So far, APA, MLA, and Chicago scholars have not decided on a standard citation format for this kind of information yet, but in the link that I provided, some universities offer guidelines for students on how to reference it; keep checking back with those sources for more updated versions. I just want to leave you with this: it is also important for us educators to learn about our students. (I think your audio just got cut out. Not really? Let me try this, maybe you can hear me this way. Yes.) What motivates them, what their goals are, and how your course can help them achieve those goals. Findings from my current dissertation study on teacher feedback reveal that students perform well and maintain academic integrity when they receive meaningful instruction, feedback, and emotional support from their instructors. This is, in my opinion, one thing that AI cannot do so far. Thank you so much, Jet and Ian. So this last question is for everybody, really. It's about ethical considerations: what are some of the ethical considerations surrounding the use of AI in education, like privacy concerns and potential biases? Would anybody like to jump in? Yeah, sure, sure. One big thing I'll be looking at, that I'm interested in, is the labor considerations. You know, there are known issues with intellectual property attribution and a lack of transparency about where these models get their information from and whose work they're using. And then there's also the content moderation aspect. We kind of have this divide; I was reading an article in The Guardian about the differences between OpenAI the company and AI that is actually open. So there's that question of access: who gets to use AI, do you have to pay for it, and what are the pros and cons of having it reside with a company who can put a bit of a wall around things and intervene if the model starts spitting out biased or inaccurate output?
That can be a great thing, but then you also see cases like, I believe it was OpenAI, that was using content moderators in Kenya who were being paid around $2 an hour, right? And you can contrast that with truly open models: there was a recent leak of, I think, Meta's AI model. That could be a good thing for open-source information, but it could also be a bad thing, with bad actors using it with little to no oversight. So I think that's interesting to watch going forward. Yeah. From the computer science side, one of the things we've been looking at is that a lot of the easy programs we assign to students were already out there on the web, but those were pretty easy to search, so we would know when students had copied code directly. So what is the boundary between copying code from the web versus using ChatGPT to answer the questions for you, and what kind of contributions do students have to make in order to make something their own, when those resources were in some ways already available? This is something we've been struggling with, and I don't think we have great answers yet, but hopefully we will; those ethical questions keep surfacing. And coming from the librarian perspective: in the past, we used Google or other search engines, we would see a list of results, and then we could look at them, review them, evaluate them. Now the algorithm picks out whatever it decides is the best answer. Even with the chat version of Bing, which will pick up several sources, how do they pick those out? I'm really not sure, and that's something I've been struggling with: how do we make sure that it brings up appropriate sources? Okay. Oh, Lori. Are we wrapping up? I was just going to say... Go ahead, we have time. I'm a little bit concerned that the Internet has already started hiding the way information is created, how much effort it took, and how those processes work from some people.
And I feel like this takes it even a step further. I do worry about students' understanding of exactly why and how our information gets created and disseminated, and about what that actually means for the future. We've seen how journalism collapsed after the Internet and how the whole industry has shrunk. And I do wonder what the future looks like for journalism, and for students' understanding of where their information comes from and who is producing it. It just makes me concerned, because you can't force somebody to know that, and this very much hides it. Does anyone else have any perspectives on this, or on what concerns you? From a learning scientist's perspective, I think we need to get back to the purpose of education: what are we actually doing? I skipped a lot of slides talking about how the typewriter was a great technology at the time. In high school, or even in college, we had typewriting classes. But after word processing, those were not needed anymore. So I think our education needs to move forward with the technologies that are available to us, and we need to think about what students actually need: what skills and knowledge do they need to enter the workforce so that they can be productive members of society? So we need to revisit our syllabi and our curriculum. For example, is basic coding, the "hello world", still important? If it is important, in what sense, and how will it help our students go from a very beginning programmer who only knows "hello world" all the way to doing AI? This is something that we as educators need to spend more time thinking about. Okay, so thank you very much for all your perspectives and your answers to my questions. I just want to share my screen again.
So, thinking about next steps: what comes after this? I believe this was a great initial conversation, and some of the questions were answered in the Q&A, so you can take a look at those as well. There are a lot of questions and a lot of things we need to think about, and we're hoping that we will have more of these discussions and then decide on some actionable things that we can do. Everybody who registered for this will receive a very short post-panel survey, just to ask what you would be interested in learning more about in the future, so we can plan for those. And Lori is already showing you the LibGuide that we have now; hopefully we'll keep adding more resources to it, so please check that out. I just want to say thank you to our panelists for sharing your insights and expertise on the impact of AI in education; your contributions to this discussion have been really invaluable. So thank you again for spending your time with us today. And thank you to the audience for your questions and answers. We'll keep track of the Q&A, compile the questions, and share them in another venue too. So thank you again. Take care, and hopefully we'll see you next time.