Welcome to the Future Trends Forum. I'm delighted to see you here today. We've got a fantastic guest on a crucial topic, and I'm really looking forward to our conversation. We've been focused on artificial intelligence for several years here on the forum, and ever since ChatGPT took off last November, we've been making this one of our leading topics. We've had a whole series of sessions diving into AI from multiple points of view: assessment, curriculum, pedagogy, the overall future of the labor market, and we have a session coming up in a couple of weeks about open-source AI. But today we're going to focus on a particular aspect, which is how we consider, apprehend, and use AI in higher education with an eye towards justice and equity. Now, we're doing this with the help of someone I'm absolutely delighted to be able to host. Maha Bali is a professor of practice at the American University in Cairo. She is one of the most energetic humans on the face of the earth, doing tons of work from social media to workshops, teaching presentations, and writing up a storm, and I'm really, really grateful that she could join us today. Now, without any further ado, let me bring her up on stage. Maha, are you in Cairo right now? I am in Cairo, Egypt. Hi, everybody. Excellent. Welcome, welcome. It's so good to see you. Thank you for joining us. Thanks for having me. Now, Maha, you know our tradition in the forum is that we tend to ask guests to speak about what they're going to be doing next. So I'm curious: for the next year, you know, the rest of 2023 and early 2024, what are the big projects in mind for you, and what are the big ideas that are uppermost in your mind? Right. So first of all, Salaamu Alaykum. Welcome, everybody. Salaamu Alaykum just means peace be upon you, and it works for day and night, so it's really good for different time zones like today. And can I just talk about what I was doing this week?
Because this week was the first week since early 2023 that I didn't talk about AI every day, because I was doing a children's camp. So when you said that about my energy: oh my god, whatever energy levels people thought I had, being with kids for five days in a row, co-facilitating a children's camp, is another level of physical, mental, and emotional energy. So that's what I'm coming out of, but I'm re-energized and ready to talk about AI again after pausing for five days. Definitely a lot of AI in my future. I still think we don't know what's happening with education; we're still learning. We've done so many workshops and community conversations at my institution about this, and as part of Equity Unbound, which is a movement that I host, and MyFest, the Mid-Year Festival, which is happening right now; maybe one of the folks who's part of MyFest can put the link to that in the chat. We're doing a lot of AI sessions there. I think there will still be people who wake up in September and think, oh, wait a minute, I haven't planned my courses yet. We did a summer institute to help people rethink their assessments, but people wake up in September and discover, oh, I still need to know more, and maybe there's going to be new AI coming out, so that's definitely on the horizon. I also facilitate our new faculty's one-year program, and we've inserted more AI into that, but I also want to start to insert a lot more compassionate education and equity-and-care types of things into it, because that's also part of what I do, and I want to insert a lot more choice into our faculty development program. So I'm working on that as well, and probably another kids' camp next summer, but let's not talk about that just yet. Oh no, you'll need the energy for that. What age were the kids? They were supposed to be 8 to 12, but they ended up being 6 to 12. Oh boy. So we had 6 to 8 and 11 to 12 with nobody in between, so we had to split them up for some activities.
It worked out okay in the end. Oh wow, this is great. Well, it's an impressive amount, and you're also supporting faculty at AUC, right? Yeah, I mean, that's my main job, right? The Center for Learning and Teaching. My main job is to support other people with their teaching, for sure, and I also teach my own class on digital literacies, and AI is huge in that. I'm thinking of having my students do community-based learning next semester by going into schools and teaching middle school, public school students about AI and some other things as well. Oh, that's great, that's great. Friends, if you're new to the forum, I'm going to ask our guest a couple of questions to get things rolling, but then I want to get out of the way so all of you can put forward your questions and comments. So again, if you're new to the technology, or if you haven't used it for a while, remember the buttons at the bottom of the screen: the raised hand, the question mark, and the chat box. To begin with: just researching this deeply for the past year, I've found that most faculty, most staff, and many students tend to focus on AI in a very instrumental way. How do I use this in my class? What are the implications for cheating? What software should I use? But I rarely hear people talking about it in terms of equity. So to get people started, and by academics I mean everybody: faculty, librarians, technologists, presidents, students. How should they start approaching this mentally? What are the major concepts about AI and equity in higher ed that we should bear in mind? All right, so I have a lot of things to say about that, and I took a lot of notes to help you today, because I knew I was going to be in a kids' camp and not in the right frame of mind. But before we do that, do you mind if I ask the audience a few questions to sort of see where people are at? Of course, of course.
So I put a link to the Mentimeter in the chat, but I don't know how to share my screen to show the results. Well, you should be able just to mouse over your window, and "share my screen" is one of the options. Ah, I see it. I just have to hover over my own window. Okay, I see. So are people seeing... I don't know if the link is clickable. Friends, you should be able just to copy and paste that link over. Okay. All right, hopefully. So just before I start, I'm seeing Shelby saying that, as someone who works with struggling writers, the equity angle is really intriguing to her. All right, I'm looking forward to talking about that. I think equity always comes to mind with everything, and then sometimes things keep happening in front of you that make you notice equity issues, but you need to sort of keep an eye out for them. But I do want to ask people... great, thanks, Mark, the link works. Awesome. So let me start sharing my screen so I can see your answers to the first question. Very good. Which is: how do you feel about AI right now? And you can have more than one feeling, by the way; you can answer more than once. So I'm seeing excited, curious, worried, intrigued, challenged, overwhelmed, anxious, frustrated, expectant, behind, confused. I think we're always behind. Like, no matter how far ahead we are, we're pretty behind. Overwhelmed again, interested, apprehensive. Skeptical of the hype; we should always be skeptical of any kind of hype. Can't lump together. Amazed. Grammar not up to snuff. Oh, I've heard a lot of things about ChatGPT, but usually grammar is not one of the critiques I've heard of it. That's interesting. Tsunami, cautious, concerned, horrified, trepidation, philosophical implications, cautiously optimistic. Surprising income inequalities. I can see that happening. Catching up, amazed, transformative, neo-Luddite, calm, creative, positive, thrilled.
I wish this Mentimeter had some kind of sentiment analysis so that it would color the positive ones differently from the negative ones. We should give them that feedback. Maybe it's because I have a free account; that's what I get for free. Vague, obsolete, quixotic. Oh, interesting. All right. Thank you all so much; a lot of uncertainty there. Okay, let's ask another one. Which metaphors of AI do you find useful? I started asking this question after my mom, who knows nothing about it (she's a doctor, a medical doctor, I mean; she doesn't work anymore and isn't in academia at all) said, you know, AI is like fast food. I was like, oh, that's such a good analogy. So I started asking people. So there's the stochastic parrot, obviously from that very well-known paper, which is really helpful in explaining how it works. Teaching assistant. By the way, I'm co-authoring a paper; if you haven't seen this on Twitter, Anna Smith, Anuj Gupta, Esther Temer, who is an undergraduate student, and I, the four of us, are writing a paper about metaphors of AI. And it turns out that a lot of people have written papers on this, so it's very interesting. Not like spell check, shuffling machine, wordsmith, electric motor; calculator is a very common one, of course; math scientist assistant, mascot analogy. Ooh, interesting. Heatmap, avian identities, gym, brands, sloppy copy. These say a lot about our attitudes when we choose. Tarot cards, interesting one. Newest sibling of Clippy. Yeah, it's funny how people are connecting to that. Black Mirror, loyal dog, 10,000 grad students in one room (okay, you probably ran out of space). Collage, fire hose, co-thinker, adolescent, competitor. Wow. Mediocre averages. Yeah, I can see that one. Stranger at the door, mimicking human learning. Personal assistant is bigger now, and calculator is big; calculator gets used a lot. Problematic for many reasons, but useful in other ways.
Bumbling child, your new teacher, workhorse, unwanted dinner guest. Two truths and a lie. I like the two-truths-and-a-lie one; that's so cute. Least common denominator. That's nice. Somebody else likes fast food, apparently; I don't know if they heard it from me or if it came to them separately. Monkeys writing. That's nice. Classic. Yeah, I like that one. All right, one last one. So one of the things I always assume is that I'm not the only person in the room who has thought about equity issues, or about anything in general, right? Someone said brain in a blender; that's also an interesting one. I'm sorry if I missed yours. It's really difficult to keep track when there are so many. But this one: what are some equity challenges that you've noticed or encountered related to AI? And this one's not a word cloud, so if you start answering really quickly I may not be able to keep track, but I'll try to pick up on some of what you have, and then I'll go back to my own notes, which I've organized in a very particular way. So, the cost of GPT, right? In the beginning it was free to everyone, but actually some people can pay to get better versions of it, right? Some users know how to use them, some don't, of course, right? Enhancing the digital divide in some ways. Biased answers. Yes, we'll come back to that one. Incredibly prohibitive hardware and energy costs, right? Very bad for climate change. Technical barriers to open platforms. And even more than technical barriers; I'll talk about that. Access in terms of cost and time. Access to IT in general, right? Unequal access, paid versus free tools again. So that's coming up a lot, as is the bias of the data. Wealthy users will be able to afford every version, true. Can use it effectively with prompt engineering, or not effectively if you don't know how to do the prompt engineering, or it's not good, or the prompting for what you want is not straightforward. Ageism when giving career advice? Does it do that?
I didn't know that, but yeah, I can imagine, because a lot of the internet is ageist, I think, so it just reflects a lot of that, right? Language and context, intentional meanings and misunderstandings. The "must provide a phone number to use it" one: I'll come back to that; it has a lot of other implications. People with means, again; reproducing biases in training data. So some of these are recurring, right? Visually impaired students don't have an easy version. As far as I know, the person I know who uses a screen reader says most of these tools are accessible; I don't know what might be the case elsewhere. Will this tech finally shift the paradigm away from evaluating students based on writing and toward UDL? That would be a positive, I guess, in terms of equity, right? There's the human labor cost, the racism in it. There's an article here that made them think; if you can put that article in the chat, that would be easier for people to access, obviously. Faculty hesitancy is in itself an equity issue. That's true; we'll come back to that one as well. Assuming everyone is using it to cheat: that too is an equity issue, I agree. Again, people saying access to more advanced tools; the question of linguistic homogeneity, right? Further marginalizing indigenous and other languages, and other Englishes, for those of us who teach in English. No one really knowing how it works; that's in itself also an issue, definitely. So some of these are recurring, and that's why I'm not reading them again, but definitely the digital divide questions are coming up, and the bias is coming up, and that not everyone is able to do the same amount, and the racism. So I will stop here. And compromised privacy; thank you, somebody mentioned that one. That's a huge issue. Now, how do I unshare my screen? Stop sharing: mouse over yourself, and it's right there. I saw it. There we go.
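The sentiment-coloring Maha wished the Mentimeter word cloud had amounts to a small sentiment-classification step over the one-word responses. Here is a minimal sketch in Python, assuming hand-made illustrative word lists rather than a real sentiment lexicon or any actual Mentimeter API (all names here are hypothetical):

```python
# Tag each one-word feeling from the poll as positive, negative, or neutral,
# so a word-cloud renderer could color them differently.
# These word lists are illustrative only, drawn from the responses above.
POSITIVE = {"excited", "curious", "intrigued", "thrilled", "calm",
            "creative", "positive", "amazed", "interested"}
NEGATIVE = {"worried", "overwhelmed", "anxious", "frustrated", "confused",
            "horrified", "concerned", "apprehensive", "behind"}

def classify(feeling: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for one response."""
    word = feeling.strip().lower()
    if word in POSITIVE:
        return "positive"
    if word in NEGATIVE:
        return "negative"
    return "neutral"

responses = ["Excited", "Worried", "Curious", "Tsunami"]
sentiment_by_word = {r: classify(r) for r in responses}
```

In practice, an off-the-shelf lexicon such as NLTK's VADER would be a sturdier choice than hand-built lists, which miss metaphorical answers like "tsunami".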
Thank you all. So now I can share my thoughts on this. All right. One of the key things that happens when we talk about equity is that a lot of times we go financial, and then we stop at financial, but actually inequity and oppression are multi-dimensional. And sometimes when we talk about financial, we stop at the cost of the tool and don't recognize all the other financial implications beyond the cost of a tool. So, for example, in terms of access (and of course there are some systemic issues and some that are less systemic), there are economic issues: yes, definitely some tools are paid and some are not, and so obviously people who are able to pay for the more expensive tools will get better access and all that. But actually in Egypt, Hong Kong, and I think Saudi Arabia, a few countries, ChatGPT was not available at all. Officially, it's not available in Egypt. You put in your phone number, and it recognizes an Egyptian number, and it recognizes your Egyptian IP, and it won't give you access to ChatGPT. But then there are people who have the digital literacies and the means and a friend in the US or wherever (thank you, Nate Angel) who lets you use their number, and you have a VPN, until you manage to get in. Now there are other ways to access it, but at the very beginning that's how I got access to it. And a lot of people, no matter how much I explained how a VPN works, could not make that work. And it's not my country that's blocking it; I think the company just decided these countries would not have access to ChatGPT. Nobody knows why. Well, that's interesting. Yeah, it's very weird. And then of course there's the digital literacies element that a lot of people mentioned here. So, you know, when people said it wants your phone number, they may have thought it was a privacy issue, but I was thinking that it actually knows where you are, and so it won't give you access.
But the digital literacy aspect is really important. Like, who had the time in their life to learn these things? Who has the time to take away from other things in their life? You know, adjuncts and faculty have a lot of work responsibilities, students have a lot of responsibilities, and who has the time to learn how to do this, and the digital literacies to do it? I was a computer scientist originally. I ended up hating computer science, but that's another story. I know how neural networks work because I designed one for my thesis, so I understand how AI works. And I've been teaching digital literacies for a long time, and I learn new technologies quickly, but not everyone's like that. And again, a lot of us in the tech field know how to be critical of technology, so we're not scared of it; we don't take in the hype. But the hype gets to people, and it affects them mentally and emotionally, and I think that drains faculty after they've just been through a pandemic. It was a huge shock that they had to rethink their pedagogy, and now we're telling them, oh, by the way, rethink your pedagogy again. Yes, because education is not going to stay the same ever again. And a lot of what they could do during the pandemic, like open-book, take-home exams: now that's a problem with AI, and they're like, oh my god. That in itself is hugely problematic. Of course, Nancy Fraser talks about injustice in terms of economic, cultural, and political dimensions, but those aren't the only dimensions; there are even more. But culturally, we know the data it's been trained on is biased. And there was another guest on your show who said, well, yes, because it's trained on the internet, and the internet is biased. So of course the data it's been trained on is biased, and what it gives us back is based on a mostly white, Western, male point of view a lot of the time. It does try not to be offensive, though.
But I don't know if all of you know about the outsourcing of how it was trained. The way OpenAI trained its models not to be offensive and not to produce violent content was by asking humans to filter the data. That work was outsourced to humans in Africa, in Kenya, and the workers, other than being underpaid, developed mental health issues because of it, and the company didn't help them through that. So there's a human cost, a mental health cost, to trying to make this AI not be offensive to us. And then the third element is political: who gets to work on creating AI? Who trains it? Who decides about the training data? Who decides even what's offensive? Because there was one time that I asked it to give me a feminist interpretation of the Quran. Wow. And it said, oh, I don't want to offend Muslims, so I'm not going to do that; ask someone else. And I thought that was okay-ish, but it also makes assumptions about what is considered offensive in Islam, and it decides for me that this is not something I can ask it, and that sort of shames me for asking. And whose values are being taken into consideration with these things? The other thing I was also thinking about is that, with AI, there's the input, the process, and the output. We don't know how it works; it's a black box, and it's probably always going to be like that. We keep talking about, oh, can we create explainable AI? I know that's actually really, really difficult, but they could explain a little bit about what they're giving it as data and how it's designed, and can they reverse-engineer an understanding of it in some way? But that's not something that tends to happen with machine learning and neural-network types of AI. And, for example, with the input itself (not ChatGPT specifically, but the visual AI especially), we know there are copyright considerations here. There's exploitation of artists who did not mean for their data to be used this way.
The one that always scares me the most is deepfakes. And I know they've existed for a while; it's not new. But now it seems to be much easier to take videos of someone and create AI making that person say something different. And I'm someone who's very, very much an open educator, so a lot of my stuff is open, and imagining that someone could take my video without my permission and create something with it and attribute it to me: that's huge. And of course, also, when we use it, how is the data that we're giving it being used, maybe to train it? What's going on here, and who's being exploited in the process? And yeah, of course, there's the output, other than it being biased. And I can't keep up with this chat, by the way; I'll come back to it. That's all right, you don't have to. I can usually keep up with the chat, but not today. Other than the biased output towards a particular way of writing and point of view, there's the way-of-writing thing: for people who are non-native speakers of English, or who just don't speak the English that ChatGPT considers correct or whatever, it reproduces internalized oppression, the sense that their English is not good enough. Students see it, and they're just amazed by how well it writes, and they think it must be better than what they would write. So why would they edit what it writes? How could they even write better than that? And then they never learn that they can have their own voice. And to me, that's a huge, huge, huge issue. And then other things come up when I talk to faculty: the detectors are not very accurate, and faculty talk about, you know, students who are smart enough to cheat smart and get away with it, while the students who don't cheat smart don't get away with it.
And who's going to miss out on learning because they actually think that ChatGPT is going to help them? And who's going to miss out on learning because a faculty member couldn't redesign their assessment to make it better? And there's the burnout that professors and teachers around the world are experiencing, and the struggle that some of them are having trying to work with it. Of course, in some disciplines it's easier to say, oh yeah, let them use AI, and we'll work with it and we'll critique it and we'll integrate it, and it'll be helpful to them in the job market and other places. In other disciplines it's not like that, and it's a big struggle to teach language and writing right now. Yes. And the fact is, a lot of teachers are feeling threatened, and I think it's unnecessary to feel threatened. I don't think we should feel threatened in education, because teaching is so much more than tech people ever make it out to be. Why is that? Surprising, I guess, because I know so many faculty who are threatened. I know. And in so many different dimensions, so I don't understand why it's not necessary for them to feel threatened. Yeah, I think that there's a lot of media hype and a lot of tech hype around technology every time something happens, like MOOCs. Oh, I'm sorry, I misunderstood; I thought you meant faculty in general should not feel threatened in general. No, no, I mean they shouldn't be threatened by AI. Yeah, yeah. And the AI is going to get better. But what AI does is not the sum total of what learning is; it's not the sum total of what teaching is. So maybe parts of what we teach might have to change a little bit, but, you know, learning is not writing; writing is part of learning. Definitely. I learn as I write and when I write, but the learning itself is something that we write about and that we write with. There's so much more, you know, and teaching is not just about giving feedback and grades on writing. Teaching is so much more than that.
So I'm just saying, you know, the fact that it mentally and emotionally affects people is important, and we should care about how they're feeling and try to help us all get through that. Maha, that's a terrific amount of concerns and critique, and you can see it reflected in the chat, where people are responding and celebrating this. I have a quick question for everybody and then a quick question for you. The quick question for everybody is: this chat is very, very rich. Would anybody object if I saved it and posted it to my blog? I can strip out your names and make it anonymous. So please, in the chat, just let me know, yay or nay. And while people are answering: Maha, this is a very extensive, very powerful critique. I'd like to bring that to an individual professor, or to an IT leader, or to a chancellor, and say: given all this, given this manifold catalog of problems with AI, how then should we respond? What should a French professor, what should a biologist, how should an academic provost respond? If it is this terrible, should they block it immediately until it gets better, or how else should they respond? Yeah. I never think blocking is a good idea. I mean, you know, sometimes people talk about the threat of AI in terms of severity levels, and the highest severity is when you use AI in something like the criminal justice system, where somebody could get into really, really big trouble because of a mistaken AI. That's huge, and I think they've stopped using that in the States, but they were, and it was very biased AI, right? So that's a very high threat level. Using it for fun and whatever, that's very low level, and the risk with education is somewhere in the middle, I think. The reason I say blocking it is not a good idea is that you can't actually block it completely. You can block it in your institution, but then that becomes kind of like redlining, right?
Like, some institutions don't have access and some do, and some people at home have access and some don't. So that's not going to work. Yeah, exactly. And so what I always say, and people keep saying this is vague, but it's not, because I have an entire blog post about it, is critical AI literacy. People need to understand, and then make their decisions. So for example, when I started to understand more about the issues myself, I stopped using it for fun. I use it when it's something necessary, when I need it for something specific, when I need to give a workshop to help people understand it. I think teachers need to understand it in order to understand if and how students are using it. I think people need to understand where it might be useful in their discipline and where it might be problematic in their discipline. In my institution, because I'm at the American University in Cairo, we've realized that it will do well at a lot of generic topics, but it will not do well at a lot of very unique, contextual, local topics. But more importantly than all of that, it doesn't actually learn. So if students are actually learning something, I keep telling faculty, if they're learning something authentic, there is no way that ChatGPT itself or any kind of AI will do that learning for them. But if they end up using it to reflect, to help them write up something that they had already learned, then that's not a learning problem. There are other equity issues in terms of the learning, you know, the writing voice, and a lot of the other issues are things we deal with outside of the existence of the AI, but blocking it would not solve that problem. The critical AI literacy has to happen for the faculty, the teachers, and for the students. And when I say it happens for the students: sure, you could have one course where somebody teaches the students about critical AI literacy, like an information literacy or digital literacy course.
But in every course, what's the AI literacy needed for your course, right? And I know a lot of teachers, in my institution and elsewhere, are frustrated by institutions giving very vague guidelines as to how to go about the AI thing. But I think it's better to give vague guidelines than very restrictive guidelines at a time when we don't know the extent of everything. Give people freedom, because what it means for a 100-level writing course is completely different from what it means for a 400-level engineering course. Quick question about that, then. Are you seeing professional societies aimed at individual disciplines, you know, chemistry, history, stepping up here and giving guidelines for faculty in their field? I just saw a guideline coming out of the MLA and the Four Cs (CCCC), so those are the writing people, right? I haven't read it thoroughly, but I know Anna Mills contributed to it. I don't have the link to it on me right now; if someone else knows it, please add it in the chat. I thought that was really good: by writing teachers, for writing teachers, and those are the most important group in the sense that they can't ignore it. They can't do the ostrich-with-its-head-in-the-sand thing of "I'm not going to talk about it." I've had people say, I'm not going to talk to my students about AI because I don't want to give them ideas. I'm like, they already have ideas, so you need to tell them what you think about it. Well, first of all, thank you; that's a great answer to my question. And friends, I promised I would get out of the way, and I'm hereby doing so. We already have a stack of questions lined up, and I want to make sure people get a chance to ask them. So let me bring these up. This one is from our friend Kate Herzog; let me make sure we all see it clearly. Kate asks: what about the need to teach prompt engineering? Don't we need to become proficient to teach this skill?
Yeah, but there are a lot of free courses on it, and so I don't teach it. I give them the free courses, and I tell them, take a look at the LinkedIn course, take a look at this other course. Actually, what I did this semester: I have a choose-your-own digital literacy pathways course, where students decide what they want to learn about and how they want to learn it. And I said, if you want to focus on AI, here are some prompt engineering courses and here are some different AI tools. Learn the prompt engineering, try it on the different tools, see what it looks like, and report back to me. So I didn't actually go through the courses; they did, and they gave me feedback on the courses. So that's what I'm doing: I just send them links to the courses. Just find a good one, I guess. Any particular ones to recommend? There are a couple on LinkedIn. I haven't done them myself; my students say they're okay, but not phenomenal. There's a book by someone that's all about AI prompts, but they're all similar. There's Ethan Mollick's blog, and there's your blog. Indeed, it has some great ideas. And yeah, I would say Ethan Mollick's blog is a good one for that. Very good. Kate, thank you for the great question. Friends, if you're new to the forum, that's an example of a text question, so if you'd like to follow up and ask similar questions, just use the Q&A box at the bottom of the screen. We also have a question from our dear friend Sarah Sangregorio, coming to us from New Jersey. Sarah asks: a lot of discussions, including multiple instances of this forum, go back to the very idea of assessment. Do you have any tips for moving the needle on strategies like authentic assessment? I mean, we're trying. It's a lot of work for faculty to make that mind shift if they haven't already been somewhere along the spectrum of authenticity.
But I do think that's the main thing, because honestly, a lot of the kinds of prompts that AI does well at are also the kinds of prompts that paper mills were doing well at, or the kinds of prompts that are generally easy to copy-paste from the internet. If people are concerned about that, they're the same problems; this is just one more way of doing it. And that's really because those prompts don't show learning that's really unique, right? And honestly, even for us: when a professor asks me, can I use AI to give feedback to my students? Sure, you could, but is their writing so boring to you that you don't even want to read it yourself? There are ways to use it to give feedback on some things while you still read the work. I'm not saying that's always going to be the case, but what I often want to say is that maybe what you're assigning them to write is not a good prompt, because you don't want to read it. It should be something that you want to read, something you want to engage with, something the student wants to talk to you about beyond writing it. I mean, learning is a lot more than this. But moving the needle: I think it's really important that teachers are given all the support they need to be able to do that kind of thing, which sometimes means we need to give them teaching assistants and smaller classes. A lot of times the problem in higher education is giving people more classes, or larger classes, and then asking them to produce quality assessments and quality students, and then saying, let's use technology to replace that. You're trying to find a technological solution to a problem that you created, a problem that actually has a human solution that we're not seeking. So we need to think: is there a human solution that might be better? Then let's just go with that.
In the chat, your fan, George Station, says, dangerous talk. And also, to an earlier point, Doyle Friskney recommends Vanderbilt's prompt engineering online class, which is free. Okay, thank you for that one. I hope you put the link in so that we can get to it later. Thank you. So let me also give you an example of a video question. And this is our good friend, also at Vanderbilt, Derek Bruff. Let me bring him up on stage. Hello, Derek. Hi. I should say University of Mississippi. Excuse me. Formerly at Vanderbilt. But yeah, that prompt engineering course is by Jules White, who I know from Vanderbilt. And I have signed up but haven't taken it. So I'm a classic Coursera student. Is it free? Yeah, it's free. Yeah, free to take. You know, it's the usual Coursera model. So one of the questions I asked in the chat a while back, and I don't know that there's a good answer to this, but I keep asking it in case someone has an idea. So back on the topic of AI detection tools. If I'm teaching a class and I have a student submit an assignment and I run it through one of these AI detection tools and it tells me there's like a 90% chance this work is AI generated, I don't know how any academic integrity process at any campus I'm familiar with would be able to work with that. Like, what do you do at that point? Have a series of awkward, hard conversations with your students and have them grilled by the academic integrity investigators? It's not evidence, really. It's not much more than a hunch. And so I'm wondering, is there a way to use our academic integrity process here? Or is that just kind of? Yeah, so I'll tell you what I did. I tested all the AI detector tools all the time. And then when Turnitin's came in, I tested that one extensively, because I knew that Turnitin has a reputation, so the faculty would trust it right away, and it's right there. And yeah, it was forced on us anyway.
We didn't realize we could say no. Anyway, so everybody knows they're not good, but what I remember really well is this very particular thing. First of all, if it's 100% AI generated and you just do a little bit of paraphrasing, you can go down to like 5% sometimes. If it's 100% AI generated and you put it through Quillbot, it gives you zero. Students have already been using Quillbot. What is Quillbot? It's 100% AI. You don't know Quillbot? I don't know Quillbot. Oh my God. So Derek Bruff doesn't know Quillbot. Everybody knows Quillbot. I can't know everything. Quillbot is a paraphrasing tool that students have been using for a while, even in school. My daughter is 11 and she knows Quillbot. I need to talk to more 11-year-olds, obviously. Yeah, you know, your kids are the wrong age, maybe. But anyway, students have been using Quillbot so often that when I tell my students to disclose transparently how they've used AI, they skip the Quillbot stage. They don't even think about it. I tell them, you write "ultimately" in your regular essays. They're like, sure I do. I'm like, what? You write "ultimately"? And they say, yeah, Quillbot always does that for me. I'm like, you know Quillbot is AI, why are you not disclosing it? And she's like, oh, it's always open. It's almost transparent to them, like spell check or autocorrect. Sure. I wouldn't cite my use of spell check. I probably wouldn't cite my use of Grammarly, right? And this is kind of one step beyond that. Yeah. And apparently, Grammarly is going beyond what Grammarly sounded like it was supposed to be doing. So that's another thing. But anyway, this is the big one: it's 100% AI generated, put it through Quillbot, 0%. So of course, what I tell faculty is... Wait, so the text is generated by, like, ChatGPT, and then you give it to Quillbot, a different AI tool, to paraphrase it.
And then it goes down to 0% chance of being AI generated, even though it was doubly AI generated. Yes. I'm glad we had this conversation. This is a tip for everybody. Yes. We are now in crazy land. I should have published that, right? Anyway, so there's that. So what I always tell faculty, and here's what faculty tell me back: our academic integrity committee understands this, so they know that it's not enough evidence. And for me, the biggest problem is not the 90%; at 90%, they probably have used AI to some extent, whether it's 90% or 50 or 100, whatever. The problem is that there's someone else in the same class who's getting a 0% and who is 100% AI. So one person is getting punished and another isn't. When I tell faculty this, a lot of them are like, yeah, but we're going to report them anyway. We're going to do whatever anyway. And one of them said, this is kind of like, you know, four people stole something from you, two of them are on camera and two of them are not; are you going to let the two go? But I think education is kind of different. You really don't want that unfairness. And at the same time, yeah. I think a lot of people are doing what you're saying: have a difficult conversation with the student and try to understand. For me, honestly, what I do is I talk to the student and figure out if they understand what they've written about. If they understand what they've written about, even if it's 100% AI generated, then they've learned, and they can answer questions about it, and I don't mind how they wrote it, because I don't teach writing. But I've asked people who teach writing, and that's a different story. The people who do teach writing understand their students' writing voice and can very easily tell that someone didn't write a paper, whether it's AI or their friend or the internet or whatever.
So honestly, I think the conversation with the student about the paper is the biggest thing. So one of the things that we talk about is, for example, the annotated bibliography as an assignment. AI can do that really well. Of course, it can make up the references, but there's a plug-in for GPT-4 that will not produce fake references, and there are a lot of other AIs that produce real references. Bing does real references. There's one called AOMNI, which is a kind of AI agent, I can't remember what that's called exactly; it prompts itself, so you don't even need to write a good prompt. And it finds real stuff and it does an annotated bibliography and a summary for you, which is useful if you're in a hurry to find something out and figure out which papers you want to read; that's really useful for me as a researcher. But then for a student, do you need them to learn how to do a certain thing first, like with the calculator, right? Learn how to do the thing first before they use the AI. Is that what you're teaching? Is it not what you're teaching? Or do you need to teach something different now that this is possible? I say, you know, we used to use note cards and go through the card catalog in the library. I don't know how old everyone is, but I had that, and I wrote one of my papers by hand. And then, of course, it's a different skill now and you would learn the different skill. You skip the easy things and now you do the more complex things. So what I tell my students now is that what I expect of their writing is actually higher level, because AI can do this. So you have to give me more than this. And in our class discussions I help them go deeper. There is no way, no matter how good the AI is, that it's going to relay their experience unless they give it their experience. So if they give it their experience and it writes it better, and I'm not teaching writing, then I don't mind. So the idea is to have those conversations about what they've written.
If they understand it and they can defend it and they can explain it, then they've learned it. And is that your learning outcome? That's it, to me. I've heard James Lang talk about a term that has been used in other contexts: unbundling our assessments. We've always thought about this one big research paper that we have students write as kind of one assessment. But there are actually a lot of skills that are used within that assignment. And we need to unbundle them and look at each one and say, is this something that I really need to teach my students? Do they need to learn it? Do I need to assess the skill? Or is this something we can hand over to the tools and teach the students how to use the tools? Because some of the skills we do want to teach our students, but for some of them we may be okay to say, that's why we have tools to handle this. Yeah, yeah. I mean, one example of a way I've used AI that I'm okay with students doing is when they're reading a paper. A lot of faculty complain, because we're a liberal arts institution, so students from engineering are taking a history course, and of course a lot of the history papers are hard for them and they don't want to read them. And I say, you know, you can use a tool like typeset.ai to summarize it for you. I don't know if it's going to be a good summary, but you can prompt it with other questions and so on. And what I say to faculty is, this is good. I'm not going to give you feedback that you need to assign lower-rigor papers. Go ahead and assign the papers you want to assign and let the students use help. Whether that help is a teaching assistant or a friend or an AI is not going to be a huge difference. Even a friend or a teaching assistant can tell them something wrong anyway in understanding the paper. Sometimes I've read a paper and I just want to tell someone a summary of it that's not the abstract, and I'll use an AI to summarize it for me. That's useful.
Or I'll use it to decide whether I want to read it further. So it's about figuring out what is useful. But this doesn't get at all the equity issues, obviously, because again, we don't know how it goes about summarizing and figuring out what's important. Microsoft Word used to have an auto-summarize tool, and I miss it. It was so useful when you had an abstract that was 500 words and you discovered it was supposed to be 300 or something. But its system wasn't always right. It would miss really important stuff because it wasn't mentioned often enough, for example. And so we know that biases happen in algorithms, and there can be all kinds of other biases. But what happens, if I can, what happens when the act of reading is a key part of the class? So you need to read a business case study, or you need to read a novel, or you need to read a novel in a new language. And there are many, many ways now to have summaries generated. Do those assignments just go away at this point? No, because you just need to not have them write a summary of it. It's what you ask them to do with it. So there are two things I tell faculty about this. One of them is to have them do annotation of a reading. They'll select a piece of text and comment on it, connecting it to something else or to something unique. And also vary your readings, instead of always reading white men that everybody's read before and written about before. AI will do very well on Shakespeare and the like, so be more diverse with that. And then the second element is to do things other than just writing, and other than just generic writing; you can still do writing if you want. Ask them to do something with the reading. Ask them to rewrite it in a way that's very specific to their own context as, I don't know, an African-American nurse. You know what I mean?
I guess the AI could do those things, but it should also be exciting enough that the student wants to do it themselves. Yes. A former student of mine, who's a very good friend, and I wrote together about why students take shortcuts and how we as teachers can motivate them not to take shortcuts. And I think that a lot of the reasons people take shortcuts are equity issues. A lot of times students think they're not capable of doing it well, or well enough, or as well as other people. And so are we providing enough support for them? Because AI is one kind of support. Are we offering them more teaching assistance? Are we running a competitive class where grading is comparative? One of my students once told me, you know what, do you think I cheat on the exams because I haven't studied? I studied, but there was one question whose answer I didn't know, and everybody else was cheating. So if I didn't cheat on that question, I would have gotten a B and everyone else would have gotten an A. So the more competitive our classes are, the more pressure there is. And the less time we give students to do things that they need more time on, the worse it gets. And of course, time is an equity issue, because a mature student who has kids and is working part time has less time than everybody else. Are we being reasonable with our deadlines? Are we being reasonable with what we're asking of students? Are we making sure we're scaffolding things so that students get things they're capable of doing on their own, so that they don't need to seek any kind of unauthorized help? And how much authorized help are we giving? So I'm going to have to cite James Lang again. I've been rereading his book, Cheating Lessons, which came out 10 years ago. He surveyed the literature on academic integrity and cheating, and he wanted to know, what can we do as faculty or institutions? And his big takeaway was better course design. He said, these are the conditions. See if this sounds familiar, Maha.
These are the conditions that tend to lead to cheating: an emphasis on performance, high stakes riding on the outcome, an extrinsic motivation for success, and a low expectation of success. You just named all four of those things. I mean, I know James pretty well. But yeah, a lot of what Jim Lang talks about is getting the student engaged in what they're interested in. And that's what it is with authentic assessment, too. If the student wants to learn this thing, they will learn this thing, if you give them enough time and you give them enough choice. And I'm having trouble responding to the chat for some reason, but I'm reading some of what's in there. Well, there's an awful lot of it, and I'm going to post this. And Derek, thank you for your question. I'm really glad to see you. Friends, that's an example of a video question, and also proof that you don't have to have a beard to ask a question on the program. So we have a few more questions, but Maha, just a heads up, we have more questions than we have time for. First, those of you who have asked questions that we don't get to, please let me know if you don't mind my including them in that blog post. But also, Maha, we're going to need to have you back, because obviously you've struck quite a nerve. Here is another text question. This is from Garinda, who asks, how can higher ed institutions define policies around AI without overwhelming faculty and students? What should their approach be? So the question is, how do we create the policies without overwhelming everyone? Yes. You include them in the process. Obviously you can't include every single person, right? But when we were doing ours, we had a lot of faculty attend a lot of our sessions, and then we got a group of them together in order to write the faculty one. And then we started writing the student one, but we went back to my students.
And the reason I say my students, not all students: we said any of these faculty could give it to their students for feedback to edit the policy. But we went back to my students, and the reason is that I know that if you take people who don't have enough knowledge about something and you tell them to write the policy, what are they basing it on? But my students had been talking about AI for six weeks or something before they saw the policy. So they were building it off of something they had tried and thought about and so on. So give as much voice as you can, and make sure the group of people you bring in is diverse. Make sure it represents different disciplines and also different levels of AI literacy among the faculty. With the students, they need to be diverse in all the ways except the literacy one, because if they don't have the AI literacy as students, they don't really know what they're talking about; they're taking in a lot of hype. With faculty, it's okay if they have different levels of literacy and willingness to use AI, because they're the ones who are going to have to deal with it, so they need to know what they're willing to work with. Well, thank you. And so of course you can't have everyone, but definitely involve people. If there's a way to get more feedback from more people, that's great. But at least don't just work with administrators or whatever. We have two questions, Maha, which follow up on this really nicely, so I think the timing is just perfect here. This is from Amanda, who asks, how could there be sufficient space, resources, etc. for faculty, students, and everyone else to learn the important critical AI literacy? I want to learn this, but my time is stretched as an adjunct. Yeah, of course. This is always a struggle for me at a center for learning and teaching, because I don't have in my hands the ability to incentivize beyond the intrinsic motivation of faculty.
We've been working over summer and winter, and every semester we have an assembly hour, which is a free slot, that we use, and Tuesday, which is a day that most people don't teach. So we've been trying to give multiple sessions, so it's not like, oh, there's one session, and if you missed it, you're done. We're recording them and sharing them out. But then obviously people don't have time to go back and watch. Yes, yes. I do think for adjuncts especially, and I can never make this happen in my institution, that it needs to be incentivized monetarily. With full-time faculty, we can make faculty go to committees and that kind of thing, and we can create opportunities, just a lot of opportunities, for full-time faculty. But we just have to be very flexible with what those opportunities are: where, when, how. Not everybody wants to sit for an hour; some people want a self-paced course. We give them all the different options, and then they can choose whether they want to come live, or come to a consultation and just sit with me and talk through it, which a lot of people prefer to do rather than coming to workshops. Or they can create their own WhatsApp group or their own community circle and just learn together, because they feel like their field is different; a lot of people are like, oh, your workshops are too generic for us, we're doing this and we need that. But for adjuncts, I do think they should be compensated monetarily, though I do not have the power to do that. Because for an adjunct, every moment they spend has an opportunity cost: time they could have been teaching something else or doing consulting work that they get paid for. This seems like a huge, huge equity issue. In the chat, Tom Hame says, in Texas, there's no support for adjuncts. Sarah Sangregorio adds that her full-time faculty want course releases for this.
So, the time issues. Full-time, yeah. Well, here's another question to follow up on this; we've clearly hit a major theme. This is from H. Fitzgerald: we have many faculty who are not especially aware of AI or its possible applications and effects on teaching and learning. I wonder what our responsibility is to engage them, and how to go about it. Now, I just want to draw attention to the fact that H. Fitzgerald is from the University of Virginia, which is one of the world's leading institutions, with a tremendous amount of resources. I'll put it back up. So if they are experiencing this, think about the majority of institutions, which are not so well-resourced. Yeah, it's, I think, just the technology curve. It's going to take a while for more people to be aware of something. But the issue is, when you open someone's eyes to it, then you're responsible for helping them deal with the trauma. I think when you first discover it, it's a bit traumatic. Some people get excited; it depends on what you teach, right, and where you are in terms of tech. But it can be a little bit traumatic. Some people are like, oh, that's why students' papers were so good last semester, or so well-written last semester. That's what's been happening. And then they feel really bad because they didn't know. But honestly, the media is talking about it a lot, so I'm really surprised there would be someone who doesn't know. Maybe they know it, but they don't think it affects them. A lot of people think, oh, it's not going to be able to do anything with my course. And then they come to our workshop and we prompt it, and they're like, see, it doesn't do well on the prompt. And we say, yes, but you can do a better prompt. And here's what happens when you prompt it better.
And then you talk to people a little bit more, and you realize there's, for example, in psychology, I've learned from many workshops, a very particular psychology assignment about personality tests and movies: applying a personality test to a character in a movie. Apparently it's been happening for years and years in psychology classes all over the world. And so, of course, ChatGPT does very well on it; not always, but generally it does well. You can prompt it a little bit and it does it, because so many people have done it before. So a lot of it is them discovering, oh, what I thought was a creative assignment actually isn't. And there's a trauma to that too, I think. And it's a fun assignment that I think, as a student, I'd want to do. Yes. In terms of motivation, it's a fun assignment. Well, engaging with movies is classic pedagogy. It's fun, playful, and you can dig into it as deeply as you want. But I think you've hit a major issue here. We have time for one last question. My goodness, we're going so quickly; we don't really have enough time. And this brings us to a particular dynamic here. Stephanie Friedman asks, how can we work towards asking for accountability and responsibility from the companies developing AI? It worries me that we are complicit in the environmental and labor issues it raises. That worries me a lot as well. The other issue is that a lot of people think governments should be the ones doing that. Those of you who live in democratic societies can vote, and can possibly put pressure on governments to do better. I think Europe generally tends to do better at this than the US. Egypt has a really nice strategic AI document that says some really good things; if that was what I wanted to say, I would have said it the same way. But it's not going to force the companies to do things. We could stop paying companies.
There are so many companies that do things where we are complicit in the environmental costs and labor costs, and we continue to use them. And if you start thinking about it, you may not be able to breathe, honestly. So that's a huge thing. And I don't actually know the answer to that. I remember when I found out about the Kenyan workers especially. When my daughter, who is 11, found out about it, she was like, I wish I didn't know that. And I said, do you wish you didn't know it, or do you wish it didn't happen? Because sometimes we wish we just didn't know, right? But actually, no, it did happen. So now you do know, and what are we going to do about it? And how are we going to stop this from happening in the future? And I actually don't know. I don't have an answer to this one. I even think about myself: I'm an educational developer. I can't not teach my faculty this, because the students are going to use it, and the students are going to know. No matter what ethical stance we take on this, we still need to use it in order to know it, in order to teach it. Right? But I'm not using it to play. Using it over and over and over just to play, I don't do that, because of what I know. So I'm minimizing my use of it, at least, in the same way that, yeah, you minimize your use of your car to reduce your carbon footprint; not using your car at all is possibly difficult. So maybe reduce use to a more appropriate level. Yeah. Maha, oh, first of all, Stephanie, thank you for the great question. And Maha, I cannot believe you have somehow distorted spacetime so that we have already finished a complete hour of our session. It is remarkable how many ideas you've shared and how passionately you've engaged all of our community members. I'm so delighted to host you. I have a few quick questions for everyone, including you.
First of all, if any of you asked questions that we haven't gotten to, let me know if you don't want me to post them; otherwise, I can just email them to Maha. Maha, there are so many ways to find you online. The bottom left of our screen has a link to your blog, which I strongly recommend. Is that the best way for people to keep up with you? Or should they also follow you on Twitter or on Mastodon? What would you like? I would also say Twitter. I am still on Twitter. I am on Mastodon, but I don't go there enough. But if people message me a lot on Mastodon, I'll start using it more. But Twitter, definitely, and my blog. Very good. Very good. You can also join us on MyFest. We have a lot of AI sessions in MyFest. And the price range is from zero on up, but zero is an option if you can't afford to be there. Well, that sounds ideal. That sounds ideal. Thank you so much. We're going to have to bring you back, because you have so much to offer and you were such a fantastic guest. Thank you so much. Thank you so much for having me. Have a good evening. Have a good evening. Thank you. Don't go away yet, friends. Let me just point you to where we're headed next. If you want to keep talking about all these myriad issues, we can use the hashtag FTTE, either on Twitter or Mastodon. And here you can see my handle on those different forums. I'll post this on my blog as well. If you'd like to go back to our previous sessions, which take a look at AI as well as the related issues that have come up, from equity to assessment to faculty support, just go to our archive, tinyurl.com/fdfrchive. If you'd like to look at our upcoming sessions, we have several more coming up on AI, as well as one on academic labor and more. Just go to our forum website, forum.futureofeducation.us. Also, if you're interested in my take, just take a look at my Substack, aiandacademia.substack.com.
And also, above all, all of you have so many great ideas, so many great questions, and so many challenges that you're dealing with. I'm grateful that you could bring them together with us in this past hour so that we could all think about them together. I hope all of you are staying safe wherever you are, and I hope we'll see you next time online. Take care, everybody, and thank you. Bye-bye.