Let me welcome you all to the Future Trends Forum. I'm delighted to see you all here today. We have an incredibly important topic and a great number of people with a lot of ideas, and I'm really looking forward to our conversation. For a few weeks now the world has been convulsed by ChatGPT. People have been trying it out, asking it questions, and trying to figure out what it means. And the answers to what it means are all over the place. Some think it may be the apocalypse for higher education, that it may kill writing as we know it, and that allied AI technologies in other fields, like music and visuals, may mean the end of human creativity. The other extreme says it's actually not a big deal, that it's not very good, that it's easily controllable, and that we can basically absorb it into our normal educational activities. Others see all kinds of angles and challenges. How does copyright apply to it? What does it mean to write something like that? What does it do to the human voice? What are the many pedagogical responses? What are the institutional responses? What does this mean for Google's search business, and more? Now, last week we had a conversation about this that was a rip-roaring hour of nonstop questions; we have questions left over from it, in fact. We had a whole bunch of ideas, and the topic is so good that we're returning to it right now. What I'd like to do for the next hour of conversation is invite you to share your comments and thoughts. I'm going to bring up to the stage people who are invested in this, people who are expert in different parts of it, from computer science to media studies to writing pedagogy to economics, and I'd like to have a rolling conversation about that. So if you'd like to join us on stage, again, just press the raised-hand button; it's easy to bring you right up. If you have questions or comments, please just use the Q&A box and we're happy to host you.
In fact, before I can say anything, hands have already gone up and questions have come in, and I have to follow a long tradition in the Future Trends Forum, which is to bring up on stage somebody who has a good beard. So without any hesitation, hello, Barry Birkett. Hey guys, how's everyone doing this morning? Or I guess it's afternoon now? Well, if it's morning you must be in the Midwest or the West somewhere. Right outside of Cincinnati, Ohio; I'm on the Kentucky side of the Ohio River. Okay, are you being scoured by wind? Are you being flensed by snow and ice? Well, we've been getting some more gray weather this morning. It's supposed to be wet tonight and then freezing overnight, into those negative digits we've all been seeing in the news reports recently. Load up on supplies and be safe. Absolutely, thank you. So what do you think, Barry? What's your take on ChatGPT? You have an unusual perspective. Yeah, so, everybody, my name's Barry. I'm the CEO and co-founder of a startup called Sakane. With our first product we are working to stop ghostwriting inside of academia, and so that's my perception of it: ChatGPT is no different, or can be no different, than somebody using an essay mill; we've just used AI to help stop that. So that is my central point of view. And within that, I think AI can be doing two things, similar to what people were saying last week. It can be like a calculator, which can really disrupt a student's education when they're learning their math facts and deploying those math facts in class. But it can also be one of those things that takes the first pass at something and helps somebody understand their argument better, which they can then improve upon as they work on their writing. And I think that comes down to the creativity of the administration and the instructor, and to developing that relationship with the students.
Because one of the central arguments for catching ghostwriters is that you need to be looking at how students communicate with you anyway, and then compare that to what they're submitting and see how those two things fit together. Interesting point. So for you, you're the opposite of the apocalyptic school of thought. You're saying this is something we already know how to control and handle pretty effectively. I think, well, so, again, my bent is academic integrity; my bent is stopping ghostwriting, right? A lot of research has been saying that the best thing to do is to interview your students, for instructors to cross-question a student. I forget who the gentleman was that you had on last week who taught writing. He said, in theory, everybody has a class of 15 students; if I had a class of 15 students, I could teach everybody in that class how to write, and it would be fine. But I've never had that. My lowest class has been 65 students. Whoa. Well, that was one of the panelists you had last week. That's a child of God thing. Yeah, right, so, I mean, that was the lowest class he had, and it ran upwards of 125. If you have one instructor, even one instructor with a TA population able to assist, it becomes a very burdensome task to cross-question each student about what they wrote before you even get into evaluating their writing ability and talking about their semantics and such. And so that was the lift that we saw. We wanted to follow that line of research saying, let's get more cross-questioning and talk with our students more. We felt that was more appropriate to the academic honesty and integrity statement, to the trust that we try to build: you, the student, are going to give me your best, and I, the scholar, am going to give you my best. We wanted that collaborative relationship.
And we felt that things like stylometrics, well, we've seen that stylometrics are not panning out, that use of judging metadata; I think we're continuing to see iterations on that. And then there are also some really cool technologies being developed where people are able to use add-ons inside of Microsoft Word to see when students work, how they work, where they're working from, that kind of stuff, which bends the dynamic: okay, I don't have a camera in front of you, but it's still a little bit Big Brother to be asking, what is your IP address? How long have you been working on this? So partly it's a technological answer, but it's also partly about improving the relationship between instructor and students. 100%, yes, that would be my point of view. Barry, can I keep you on stage for a few minutes? Sure. Next we have a whole horde of people I want to bring up, and those who are not white guys with beards are fully welcome, so let me make sure we get as many of them as possible. Oops, hang on a second. So let me welcome Carolyn Coward, who has one of the best jobs in the United States: librarian at JPL. Hello, Carolyn. How's my sound, Brian? It's pretty good. Your video looks a little choppy, but we can see you and we can hear you. Well, such is my life lately. Thank you for inviting me up; I really appreciate it. I would like to add to this conversation. I'd like to dial this back a couple of layers, and I apologize, I was not on last week, but I'd like to talk about the AI systems themselves. I'm involved with the NASA effort to establish guidelines, practice, and eventually policy around ethical and responsible AI. And what that means to NASA is that we delineate ethical versus responsible. Ethical has to do with the actual wrenching under the hood, the actual systems development. Is there inherent bias in the system? Is there junk code or programming?
We want to make sure that the systems we develop are of the highest possible quality with the least amount of bias, so that as the AI starts learning from itself, it doesn't drive itself off the rails. So that's ethical AI. Responsible AI is a much broader concept. It takes a look at the structure, the people involved, the organization: how ethical, responsible, equitable, and inclusive are all of those factors for whoever is building the system. So responsible AI takes a much more global worldview on artificial intelligence. Oh, that was weird; I had some strange sounds in my ear. There was just a brief silence. I'm not a robot, I promise. Well, we'll check today anyway. So at NASA we're working on both ethical and responsible AI. And when ChatGPT came up, I have to admit, so, I've been a librarian for 30 years. I've seen all the technological developments, from the card catalog to OPACs to databases to, we all remember, Zip disks, to what we have now. Yeah, the beginnings of the internet, basic HTML, Web 2.0, Web 3.0. So I've seen all the changes and I love all the changes. But the fact is, I'm actually a bit of a Luddite, a bit of a skeptic. I'm looking at AI with a real kind of jaundiced eye and a wait-and-see attitude. Heck, we just bought a new microwave oven two weeks ago and I got nervous because it didn't have a keypad on it; it's got this slidey thing, and I'm still kind of getting used to that. So I look at technology in general as, okay, let's wait and see. Let's see if this is good for the library environment, for connecting information with the people who need it. I also spent 20 years in higher education, so I have a lot of experience with what many people know as HIPs, high-impact educational practices. So I'm listening to the ChatGPT conversation here and I'm thinking there needs to be an ethical foundation to all of the systems that we work with. And I haven't looked into ChatGPT that deeply.
I'm looking at the conversation around it and I'm listening to the dialogue. I'm listening to the concern, specifically about how students will use it, how faculty can use it, how administrators can use it in general. And I'm thinking, okay, let's do a deep dive into the company. Let's do a deep dive into the system. Have they opened their system for examination, or is it a black box, a proprietary system? These are all questions I'm forming in my mind. Also, the whole "will students use this to plagiarize papers and not do the work?" Of course they will. Of course they will. They're the traditional age, 18 to 24 years old. They're busy. They see taking classes in general as a means to an end. They just want the grade; they want to get out. Most of them, and I'm making a big generalization, most of them don't see the nuanced benefit of a broadly based undergraduate education the way we do, who are oldsters and have been through the mill. Even graduate students are working full time, and if they're GAs, they're not making a whole lot of money. So yes, absolutely, we need to pay attention to this. But high-impact practices, smaller classes, as Barry mentioned: we need to pare this down. And if there's any way we can connect with our students one on one, or small group to one, and make that kind of impact, I think that will have an effect on showing students the benefit of doing their own research. So I'm here to advocate for ethical AI, responsible AI. I mean, I'm not in the "dive in, yes, this is great" camp, and I'm not in the apocalyptic camp; I'm kind of, let's see how this turns out. My one concern, though, and I'll wrap it up because I know other people want to speak, is that the general public is cottoning to this and feeding all kinds of personal information into this one system.
And with any kind of fad, any kind of trend online, on social media, the word spreads like wildfire, and people really aren't thinking about: how are they using my data on the other side? Am I being monetized? Well, of course we're being monetized, come on. But how else are they using my data, and is that an area of concern? So I have not even tried ChatGPT, because I don't want them to have access to my data, even by typing in a question: ChatGPT, what color is the sky? Is it blue? Of course it's blue. But that establishes a connection between me and the system. So these are all questions I think need to be posed to the ChatGPT company, and for higher education as a whole. End of rant. Thank you, Brian. That's not a rant, Carolyn; that's a whole series of great points. Thank you for sharing them. We are, by the way, aiming to hold a session about AI and libraries within a month or two. All right, let me keep you both on stage here, and I want to add another person. This is my friend and colleague, Lee Skallerup Bessette. Let me see if I can beam her in. Hello, Lee. Hey, how are you doing? Good. Good to see you. Are you at home? Yeah, I'm in my dark basement. Excellent. Well, it's good to see you. Stay warm there. Yeah, actually, it's nice and warm; it's raining. Stay warm and dry, then. What are your thoughts about ChatGPT now? What are you thinking? Oh, by the way, before you say anything: in the chat there's a link to Lee's Zotero list, the Zotero page, which has a ton of links on it, so I commend you to that. And thank you. Yeah, and feel free, as I said, it's a public group, so you can join the group and add resources to it. A lot of DH people, as I said, have added some contextualization around conversations the digital humanities community has been having around AI and AI-written texts for a decade, if not more. But I also want to echo and give credit: I don't know if Karen's here today, but Karen Costa was bringing up some of these issues in the chat last week that I think are really important, around unintended consequences, but also ways this can help traditionally marginalized communities, and also how it might be used to target them. I'm thinking particularly of second-language learners, where it can help them produce more fluent texts in English. They may understand the concepts, they may know them very well, but they are typically penalized for writing or even speaking in a non-standard English. Even English speakers from different regions and different cultural backgrounds have traditionally been penalized when we talk about quote-unquote standard English and what is considered acceptable university writing. So this could be a tool to really help these students communicate at a more quote-unquote acceptable level of English, where they are communicating their knowledge in a way that is approved. But at the same time, it could be a way of penalizing them, of saying, I know you can't write English this well, because English is not your first language. And we've seen that kind of accusation before ChatGPT, where certain international students, students from different cultural backgrounds, English-as-a-second-language students, have been targeted and accused of cheating or of paying essay mills, because there's no way your English could be that good.
So it could be both a tool to really help overcome some of these biases, which the students have no control over, and a way of reinforcing those biases. Another thing to think about is the disability community: those who are neurodivergent, those who have other kinds of disabilities. I think what gets lost in the moral panic is, again, not just how these tools could be useful to the general student population, but how they could serve other, as I said, traditionally marginalized student populations, such as, among others, the disability community. So how could this help? Like, I have ADHD, and one of the things about ADHD is being unable to start, right? So could ChatGPT be a good tool for people with ADHD who have trouble there? You say "write a paper" and they need steps, right? You need steps to be able to do that, and those assignment prompts don't always give the steps. So you could turn to something like ChatGPT to get you started, and you eventually write the whole paper, but you just need that little bit of extra help that you can't quite articulate. And again, that's just an example in this particular case. But there are ways this tool can bring a tremendous amount of good. And I agree with the whole privacy concern and all of that, because it could also mean tremendous harm to historically over-surveilled populations. But if we can find a way to do it ethically, could this tool be used as a good, again, not just for all students, but for particular students who have typically been marginalized and/or struggled within the higher education setting? Good observations, really good observations, Lee. Thank you so much. I really like your balanced approach, this targeting of people for good or for ill. I'd like to quickly bring in one question that was raised.
Let's see if the three of you want to touch on this. This is from our friend Hussain Hamam, who's out in Beirut and couldn't make it live today, but he was really curious about the business case of ChatGPT. He wanted to see, among other things, what it would do to the Google advertising market, and to advertising in general: think about what happens if you can simply direct ChatGPT to write you some ad copy. I'm just wondering what some of your thoughts are on that. Yeah, please, Barry, go ahead. You're still muted. I'm sorry, I muted myself to be polite and then couldn't figure out how to unmute myself. It's okay. So as far as the use case for ad copy: that's there, that's happening; that ship has sailed. If you follow, there are several Twitter feeds where people have been talking about this. And over the summer there was an article written about a researcher in, I want to say Denmark, maybe Sweden, and she had a GPT model write about itself and then submitted it to peer-reviewed journals, with the model as the first-named author and her as the second, with the permission of her PhD supervisor, and it got accepted into two. With really no pushback; it just got accepted, right? So I know I'm talking about two different things here, but this idea of purchasing time on OpenAI, I think that comes down to the AUP, the acceptable use policy, which speaks very much to what you're talking about, Carolyn, as far as putting your information in and what you get out. Some people have that issue with plagiarism checkers, right? Because you are giving over your intellectual property to have your information checked. You're giving over your intellectual property even when you ask a question, and sometimes it's the question that's more potent than the answer. And so much we don't know, right? It's very similar to Facebook at the beginning. We all got on Facebook: oh yeah, this is great, let's talk.
And we didn't know what metadata was. We didn't know what microdata was. We didn't know what was happening. And then Cambridge Analytica came out and we were like, oh, now I don't want to, you know. Carolyn, I very much have that same thought that you do: where do I want to put my stuff? But let me tell you, when you're starting a startup, you have to put yourself everywhere. So I try cleaning up my presence, and then it's like, man, I've got to be here. All right. So you can't back out. You can't get out; once you're in, you're everywhere. I've been using a hashtag on my social media: don't feed the AI beast. And the AI beast is insatiable. The AI beast may seem friendly, but it may be deadly. The thing is, we just don't know. And part of the difficulty, part of the reason we're having this conversation, is that we're all kind of smart and a little bit skeptical. We can kind of see around corners, and we're imagining a future, a rather dystopian future, unfortunately, where this stuff is, like I said, monetized without our consent or our knowledge and used against us, shaping the world. As an information professional, I've seen the change in how people search for, consume, digest, and share information from, let's say, 1995 to now. Back in 1995, people were more likely to consult print materials because the internet was still in its infancy. They were more likely to trust experts. They were more likely to share information they had vetted with their trusted experts. Now, in the era of this magic box, and I'm looking at a screen, it's kind of the great equalizer, but that's a double-edged sword. It's a great equalizer because everybody has access to information globally, but then there's the quality of the information: we have access to all kinds of garbage information. This is the other concern.
I mean, as educators, we're trying to guide our students to the best-quality information using the best-quality resources, and showing them how to format and cite sources accurately. But if those sources themselves are questionable, yet don't look questionable, the student is really caught in the middle. So how do we as educators guide our students not only to cite sources and do the whole information-literacy routine, the usual, but also to judge those sources, to track back where the content is coming from, when sometimes it's impossible to know where that content is coming from? That's another big concern of mine. Oh, it really is. And, hang on, just quickly: the two of you have offered two very, very different responses. Barry, your response is, well, it's too late; the barn door is open and the horse is in the next county; we've got to cope with this. And Carolyn, yours is more prophylactic: you want us to not touch the thing. Not touch it, but play with it gently, if I can extend the metaphor a little bit. Know what we're getting into. Know that there are a lot of unknowns, and have that be kind of okay, but realize that the reactions are, you know, very polarizing. Like I said, I'm looking at this with a bit of a jaundiced eye, a bit of a skeptic's stance, really hanging back to see where the conversation is going. I'm also waiting for all the heat to die down on social media, where people are like, oh, this is the best thing since sliced bread, let's just play with this and see. Okay, you all play with it. Enjoy. Let's see where it goes. I'm going to give it three months, and then we can have a more nuanced conversation about ChatGPT. That's a long time. Good advice, good thoughts. Lee, please chime in. Well, so, I mean, yes. Oh, I'm sorry. Lee, please, you go ahead. The confusion there is that I also go by Lee in my daily life.
I feel like John Jacob Jingleheimer Schmidt should get in on this one, since Lee has my name too, so my apologies. Oh my gosh. What are the chances? AI-driven? No, stop that. No, no, no. So I think there is nuance to this, but, who was it, John Warner, who said: if we teach writing like an algorithm, and then I sort of added, then the algorithm will always do it better, right? If we're writing ad copy and it's formulaic, the machine will do it better and more efficiently. I had it write an About Me, and it was completely wrong, and it's about me. But it followed the formula of what an About Me should sound like, right? And so this is where, we likened it to a calculator, but I would almost go as far as saying it's almost like a spreadsheet. We don't miss the fact that we don't do those calculations by hand anymore. We set up the spreadsheet and it does it for us. I don't have to sit there with my grade book. Yeah, I mean, there's more transparency; we kind of understand it, well, some people do, some people think it's automagic. I hate spreadsheets, so I think they're evil. But again, I don't have to manually calculate my grade book anymore; I am happy to pass that off to a machine. As somebody who writes, and writes a lot, one of the reasons people hire me is because I write a lot, very well, very quickly and efficiently. But the machine will always do it better than me. No. Well, in terms of things that follow a certain formula, right? And so then the question is, what is the value-add that we bring if we are creatives in the field of writing? And I think that goes for school too, but there are things that are very formulaic, right? It'll get it mostly right. And I mean, our ads are served up algorithmically now anyway. I'm going to call Barry "Lee Two," and I know what Lee Two is about to say, when you say it will always do it better.
And I heard him; I saw him go, yeah. But again, we've all seen those weird algorithmic ads where it just "knows" you; again, it's learned from the metadata you've posted, so it's going to make a T-shirt about you being a swim coach mom who actually also does ballet and Pilates, right? Like the mismatched "get your own custom sweatshirt." Or any of these other ads that just take your name, take this, take that. It's like Mad Libs, right? If our advertising copy is already like Mad Libs anyway, then this more sophisticated version of Mad Libs is probably going to do it better, right? So in terms of its uses outside of higher education, I think that cat's out of the bag. For us within higher education, for the people who study it, the people who are thinking about things like intellectual property and so on, I think it's important to take the more cautious approach. But I also agree that outside of higher education, where it's "I could pay somebody to write copy, or I could pay somebody less just to scan over the copy ChatGPT makes," capitalism is going to win out every time on that. And before Lee Two jumps in, just to build on Lee One, if I could summarize: if I'm looking for AI to help me find a recipe for broccoli cheese soup, I'm all over it. I put in how many people, what kind of cheese, how fresh the broccoli is, done. But I don't want my AI chastising me, saying, oh, you're eating too much cheese this week; wouldn't you rather just have broccoli soup? I'm like, no. So sorry. But some people do, you know; that's the nudging. Those are the micro-moves. They want nudging. And that's something in higher education too, right?
Where we have the kind of automated reminders, the automated reach-outs. And with ChatGPT, I saw in the chat somebody asking: would ChatGPT be something good to help coach students? Right. And so there's both of that. Sorry, go ahead, other Lee. Yeah. So what Lee Two has to say about that is, from my background: before I got here I was an instructional designer at a university, but before that, several lives ago, I worked in adult education, helping adults get their GEDs and go to college, or, you know, hit their goals. And a big, powerful thing inside that is what we call modeling. I think everybody here knows what modeling is, right? You show people what's there. I think one of the powerful features ChatGPT has is that you're able to say, hey, I need this, and it starts building it, and you're able to see how those things go together. And one of the things that can be interesting, from a Writing 101 point of view, is: how do you construct these things? Why did the AI choose these things as paragraphs? Now that's a learning tool. Now, Lee Prime, what I would say to your comment about it being formulaic: it's 100% formulaic. I'm not saying that it's good, right? But if you have somebody who's witty, a copywriter, trying to get those one-sentence statements, and they're riffing but it's not quite getting where they're going, they can say, hey, AI, I want 50 statements, and then see how the AI starts riffing with them, and they're able to take that as a platform to the next space. It's no different than, what was it, in Australia several years ago, maybe ten years ago, there was a guy who led that course on cheating. And basically the students were not allowed to have an original thought in the course.
So everything they did had to be begged, borrowed, or stolen, and then cited as to where it came from. You know, I think that spurred a lot of creativity; students hadn't realized what cheating really was. But to go back: I think AI can be that tool that helps unstick people. I think it can assist ESL writers. I think it can assist early learners. The question is what the instructor wants. If the instructor wants a ghostwriter, there's nothing wrong with ghostwriting; presidents have done it for eons. The question is what you want in your class. And I think, you know, last time we were talking about the growth that comes from the internal struggle you have, right? This building, inside your schema, of what makes good writing, of what you're trying to say. It's that internal argument that helps you refine what you're trying to say, and that's the important part of writing. And so that's the value we're trying to get at. So one of my questions is: with AI taking this first pass, what does that evaluation then become? Do we still argue with the AI? Or do we look at it like a sixth grader with a calculator, where the calculator is always right, even though it's garbage in, garbage out? This is great. I love the original question and how you all wandered far away from it. But each of you hit really, really important points, and in the chat there have been a whole bunch of responses, including the fact that you are emphasizing process rather than product, as well as the importance of human creativity and voice. And this is key. I want to add one more person to the mix. This is someone coming to us from an extreme time zone compared to ours: Brent Anders, who is at the American University of Armenia and was a great panelist last week. Let's see if Brent is up.
Hello, sir. Hello. Yeah, so there are a couple of things. To begin with, I think we really have to understand certain things, like the fact that an AI system that can do a whole lot of what ChatGPT is doing right now has been around for at least two years. So our students have known about this, and they've been using it. This isn't a fad that's going to go away. It's not a temporary thing. In fact, there are lots of rumors saying that within a few months, maybe even before the summer, GPT-4 will come out. Now, GPT-4 has the potential to be anywhere from 100 to 500 times more powerful and more capable than what ChatGPT is currently using. So let that sink in, as far as the capabilities and what that actually means. What's really interesting, though, is what happens when we use the system. Yeah, for sure, it's formulaic: I ask it to create an essay about democracy, and for sure it's going to use basically five paragraphs, and here you go. But then the coolness of it, the power of it, is that I can then tell ChatGPT, hey, I like what you did there, but change this. Give me another version. Make it more emotional. Make it this way. Make it the way Albert Einstein would say it. Make it the way this other person would say it. And it'll modify it as many times as you want. So that brings me to the next part, and this is what I'm struggling with, because I recently thought about our last session on ChatGPT, and then I thought about, well, what do we say right now about plagiarism in our policy at the university? In there, it talks specifically about it being plagiarism if you're claiming some other person's work as your own. Okay, well, there's a problem right there, right? ChatGPT, the AI, isn't a person. So, technically, according to the policy, that's not plagiarism. But of course it is, right? Because it's someone, some other entity's, work. But then that brings in this other question. So here is ChatGPT.
Let's say I create a document. Here's my rough draft, and my instructor looks at it and says, yeah, you need to do these things. Okay, so I take my rough draft and I feed it into ChatGPT. And now I tell it, hey, give me feedback. And then I say, okay, implement that feedback. Change it so that it's more direct, because my instructor told me that. So now it's creating these other iterative forms based off of my rough draft. But now, is it still mine? Can I still claim it as mine? Or do I say, no, it's co-authored by ChatGPT? Really, is it? Because what is Grammarly doing for us? Well, Grammarly does quite a bit. And if you get Grammarly Premium, which uses even more AI, it can paraphrase an entire sentence for you. So it's creating another version of a sentence for you. Is that AI? Do I need to say, oh, Grammarly also helped me? So you see how this gray area is really starting to grow as far as the ethical aspects of who's doing the work and who has full ownership. It becomes really interesting. But one more thing: I really think we have to realize that this is our new reality. And if we're not incorporating these tools and these capabilities into the classroom, we're going to be doing a disservice to the students. Because you go out into business. Take copywriting right now: oh yeah, that's all done with AI. And yes, we still need people to look at that content and say, oh, that's good or that's bad. But the reality now is that instead of saying, okay, you have a week to do this piece of copy, it's: you have a day to do it, and you're going to need to do 20 of them, whereas before I'd give you a week. So now you have to use that AI. That's required of you. You have to have that skill.
And many of the different 21st century skills that are being pushed say that if you want to be competitive in the job market, you need to know how to properly work with AI, because that's just the new reality. I mean, it's business. Absolutely. May I add a couple of comments? Brent, you're brilliant. That was amazing, just so cool. So I dropped a big theoretical, conceptual, universal question in the chat: is ChatGPT a tool, or is it an entity? Because if we are citing ChatGPT or any other system as a co-author, or a primary author for God's sake, it becomes an entity on its own. And then in this era of digital twins and legal rights, okay, if ChatGPT becomes a primary author, are they entitled to compensation? Do we invite them to a conference to speak? I also work a lot with open science and the various flavors of open science. One emerging sort of 2.0 of open science is open methods. Open methodology is about declaring how you're going to do an experiment, how you're going to do your science, what your hypothesis is, ahead of time, before you publish. And it occurs to me that in the area of disclosure, we may see more and more higher ed entities and other organizations requiring that if you use something like ChatGPT, you must disclose that you used it: I wrote this paper with the ChatGPT tool, system, whatever entity, so that people will know there is that support going on underneath. The other thing I wanted to mention is that whether it's ChatGPT or any other tool, the rate of change, and you mentioned version four is coming out in just a couple of months, the rate of update, the rate of development has increased exponentially. Things are changing on a technology level faster than any of us can even comprehend, let alone keep up with. And that just adds to the challenge of working in this space.
And we continue to have conversations like this because the new version is going to come out and it's going to have a twist on things that we didn't even expect. This is another reason why I'm kind of in the hang-back-and-be-skeptical crowd: I can't formulate an opinion right now, because in 12 weeks it's going to be completely different. So I'd rather just take a breather and look at the bigger picture. Thank you, Brent. I loved your comments. Yeah, no, I mean, I want to do that too. It's just going to change so quickly. One of the big things they're talking about for 2023 is this idea that everything is going to be super apps, right? You're not going to have so many different apps; everything is going to be put together. Even Google right now is talking about how all these different things, you know, text to image, text to video, are going to be incorporated together. So you're going to have one system, like a super ChatGPT, where you could tell it everything: hey, make me a video about this, make me images about this, and incorporate text and put it all together in a website. There are so many different things coming together that it's just going to be phenomenal. And I really like what you were saying about having to state, well, we used ChatGPT for this. I think that's something that will be there in the interim, but I foresee a future, a very near future, where it's going to be: what do you mean? Why are you telling us that you did this? Of course you did. You'd have to say, no, I didn't use it. Because it's already everywhere. Yeah, exactly. Nobody here is writing that they used the grammar checker. I see it being integrated with Microsoft Word as well. Why wouldn't Microsoft Word have this in there already, if they want to be competitive? And they kind of do already. Yeah. Yeah.
Oh, it's a better Clippy. I see you're writing a letter; would you like some help? It's a much better Clippy than the one we all hated back in the day, but now it might actually be useful. But I think about this whole idea as well. So there are writing classes where we want to teach students how to write, but mostly how to communicate their thinking. And then I'm thinking of disciplinary classes, where the essay has typically been the way students have communicated their thinking. Now, there's a move away from that, in terms of project-based learning, in terms of all different kinds of multimedia sorts of things. But one of the things I was thinking is that basically, as an instructor, and maybe people will disagree, I want to know that you've learned the material. You've met the learning outcomes, which are typically whatever discipline-specific things they want students to demonstrate: that you can do X, Y, or Z, or understand, you know, I'm really massacring Bloom's taxonomy here, but give me a sec. So ChatGPT the way it is now, and maybe version five will be better and I won't have to do as much coaching, but like you said, at the moment ChatGPT takes a lot of coaching from you. It's like: that's only five paragraphs, I need you to expand that; you just made that up, get rid of that paragraph; read this article on the topic and learn about it. And if I'm coaching ChatGPT through the process, then I, as the student, have to have an understanding of the subject matter in order to coach ChatGPT to write a good paper on that subject matter. So am I not meeting the learning outcome, which is to demonstrate knowledge? Right? And I think that's something we don't think about either. And it gets us asking: how are we assessing students?
Why are we assessing students the way we do? Why is the essay, particularly in humanistic and even social science disciplines, the main way? Because again, maybe this point is moot in two iterations, because you won't have to coach it anymore. But at the moment, that coaching reflects a knowledge of the subject matter: being able to say to it, that's wrong, this is right, you need to go deeper here, get a better quote than that one. Yeah, exactly. When we talk about information literacy, which skill are we now assessing? Is it that I can discern, and maybe I'm doing a better job than my classmates who wrote their own papers? Well, you bring up a great point, because I'm in another country right now, Armenia, and they're going through a major shift in the development of their educational process. They're still very much focused, and I see it because I have two children here, one in high school, one in middle school, on having students memorize information and then regurgitate it. That's exactly how the United States was maybe 30 years ago. They're still developing away from that, because it doesn't make any sense. We don't need to memorize everything; we have Google for that. What we need is critical thinking, to be able to actually use information and then create content based off of it. But now it seems like we're moving beyond that. We don't have to create the content; we have to properly guide its formation and understand that it's being done in the right way. So there's this critical thinking that's still going to be super important. Barry, I'm interested... In raising children? Yeah, exactly. So I'm interested, Barry, in what you have to say about this as far as detection software incorporating that, or looking at different aspects of that.
Well, one of the reasons I started this startup is that our tool works in a different way. I feel that the plagiarism detection tools currently on the market are antagonistic. I don't believe they value the learner. They work in a gotcha way, right? Even e-proctoring. So what we've done with our AI is different. My background is adult ed, so I think in terms of an andragogical process, and I want to know what learners bring to the table. So we're working on creating something that is more student-first, a concept I learned early on in talking with some people at East England University in England. What they were talking about is this idea of: okay, sure, this is going to stop ghostwriting, but how does it benefit the student? So we're working on that. Where we're beginning is that the student submits their paper like they would to a Turnitin submission, but then our AI reads their paper and asks them questions based off their writing style, their content, and their memory. So it's not about what they've learned; it's about how they wrote and their writing choices. We feel that's a more accurate reflection of what we want to get to. Again, ghostwriting has happened for eons. People aren't worried about ghostwriting right now, but any university has a ghostwriting issue. Lee prime, would you say that you have instructors asking, you know, feeling like ghostwriting is happening in their classrooms? Yeah, I mean, they're worried about it, that's for sure. We have our Honor Council; cheating has been an issue at universities since time immemorial, right? But we've also changed our view of it, because we've gotten a lot more sophisticated.
I mean, I'm not that old, but I can remember, pre-internet, people talking about how there was a filing cabinet in the frat houses that had everybody's papers. Those haven't gone away either. Yeah, now they're Dropbox folders, I'm sure. But it was: okay, you have this professor, let's find our brother who wrote a paper for that class. Here it is, right? This has been around since before the internet. It's just that we've now created more sophisticated, to use the term that John used last week, way more sophisticated cop shit. Yeah. While never addressing the actual underlying issues that led to it. It's easier to get the university to pay for cop shit than it is to change the pedagogy. And if I may, we're treating this as a whack-a-mole game. We keep chasing after: we've got to squash X technology, we've got to squash X plagiarism, whatever. Squash, squash, squash, instead of addressing the overarching issue of why students are doing this, and what is feeding their motivation for doing it. And do we maybe need to rethink some of our own policies and our own learning outcomes to really shape what's going on in the classroom, so that we can stop playing whack-a-mole and get to actual learning? Yeah. Correct. I mean, but that's also the goal of writing, right? I think all of us are inside of this movement away from the sage on the stage toward the guide on the side. You know how many times we've talked about rubrics, and then rubrics that are personalized, where you're able to go into the system and tailor it to this paper, right? But all of that that we've talked about, all of that is formulaic, isn't it? How many times do we see a rubric that is just four by four or five by four? It's all from novice to whatever, right?
It's like same same, but different, right? So the question is, how do we make it more nuanced? And I think that's where we're going. I do want to go back to a point, Brent, that you were making: this idea of whether we need to be asking permission to be quoting the AI. And I think that shift is about citation as well, because plagiarism is not the same thing as ghostwriting. Which comes back to my central argument, right? Again, looking at my tool: a lot of schools have things in place for plagiarism, but a lot of schools do not have things in place for when students are working with essay mills, when students are paying that $50 to a peer, when students are doing these other things. That's a lot more nuanced and a lot more difficult, and that's the issue that's happening here. Again, if the instructor wants it, it doesn't matter. The question is: are people using it and not getting the background knowledge that's more relevant, the knowledge the instructor wants them to take away? I hate to intervene here, because you guys are just brilliant. This is great, but I'm conscious of time, and we only have four minutes left. I wanted to ask a couple of procedural things. First, folks in the chat, as well as folks who have asked questions in the question box: do you mind if I copy these, analyze them, and post them to my blog? Just let me know in the chat and I'll be delighted to do that, along with the recording. There are a bunch of questions that have been pouring in, and I'm trying to pick one that will actually sum things up. And that is: if you could, right now, advise a college or university what to do about this, either, in Carolyn's memorable phrase, to whack that mole, or how to handle this structurally, what would your advice be? I'd like to give each of you about 45 seconds to take a whack at that. Mine will be super quick.
Please, go ahead. Get rid of grades. So, ungrading. Okay, ungrading. Well, but even ungrading has grading at the end, right? With ungrading, the professor still puts something into the grade book at the end. So that doesn't solve that problem. Good point. And let's go back to Carolyn's observation about the overarching issue of what drives students to do this; we'd just be taking away one of those drivers right there. Thank you, Lee. Thank you, Bryan. Carolyn? I'd like to go next. Something I've seen be actually quite helpful, and I don't know if this will work at the big Tier 1 universities with thousands and thousands of students, is to host a university town hall. Not just for ChatGPT, but when these sticky problems come up, these sticky issues we're grappling with that directly affect students, that affect pedagogy, that affect classroom management, all of that: host a town hall, invite everybody, and have an open dialogue. Okay, here's the latest issue, whether it's COVID, or being back on campus, or ChatGPT, or the next thing. I'm always a fan of hearing student voices about this. I would want to hear their concerns. I would want to hear their potential solutions. And really take what they say seriously, and maybe incorporate that and make some changes, but at least have this conversation out in the open, on an institutional level. I love this idea. I love both of these ideas, and I'd be glad to help host and support those. Thank you, Carolyn. So, for my response: I would love for us to be at that level, right? I also work as a director for a Center for Teaching and Learning, and I would love for our instructors to be at that level of AI literacy, to even realize that this is really going on. I can guarantee you that a majority of your students know about ChatGPT, or that they've already been using some of the many other AI writing tools out there to create writing already.
So I would love to be able to push this and say, hey, this is a mandated thing that you need to know about, that you need to incorporate, that you need to use to push this understanding of AI literacy for both instructors and students, because this is the new reality. Push that, have it incorporated in the overall process of doing things, so that the instructor knows they can't just think, oh, this essay is going to be my assessment. Well, it doesn't really assess anything if students are just using software to create it. So there has to be more understanding of how we truly gauge whether the student is meeting all the student learning outcomes they're supposed to within the course. Brilliant, thank you. And last but not least, Barry, slash Lee secondary. Yeah. So my response is what that early article in higher ed said: listen to your students. Have that community, right? We're trying to build community. If you are at such a scale that community is not working for you, talking about these hard concepts with your students, then look at the tools that can help. I feel that my tool is one of those shifts, and I would like to be able to share it with anybody who's interested. We're an early-stage startup. We're trying to work with teachers, with professors, with anybody that wants to enable this in their class. Please feel free to reach out to me. The whole idea is that we do the cross-questioning for you, asking those students writing-specific questions, not content-specific questions. Barry, Carolyn, Brent, Lee, you were fantastic. You're a facilitator's dream; I had to do almost nothing except reluctantly stop you all. Could you please toss your contact info in the chat if you're interested, including, Barry, the link to your new firm.
I have to wrap things up, and I do that only with great regret. Thank you so much, Barry and Carolyn. I love how you each represented so many different points of view, with so many different perspectives. This has been terrific, and I thank everybody for your thoughts. Wesson and I are recording the chat and the questions so that we can post them. Thank you all for another brilliant discussion of this amazingly challenging, interesting, and important topic. We should return to this in the new year. In the meantime, if you'd like to keep talking about this, you'll note in my slide that I've added something: please use the hashtag FTTE wherever you are on Twitter. I've set that up there, and also on Mastodon; that's the best way to find me on Mastodon. And on my blog, we've had several posts, with more coming up too. If you'd like to dig into our previous session, as well as our earlier sessions about writing and about plagiarism, cheating, academic integrity, and so on, just go to tinyurl.com slash FTF archive. You can go through those. We have a whole bunch of sessions coming up; please go to forum dot Future of Education dot US to sign up for more. And if you'd like to share any of your work, including any of your adventures with ChatGPT, email me. I'd be glad to share them with the world. I'm very proud of all of you in this community, and I'm delighted to spread the word. In the meantime, sorry we went over a couple of minutes, but this was all good, all excellent. Thank you all for this. We're coming close to the end of the year; we have one more session next week, which will be very lighthearted. Between now and then, wherever you are, stay safe. Enjoy some downtime if you can. It's been a pleasure being with you all. Take care, and we'll see you next time online. Bye-bye.