Greetings, everyone. Welcome to the Future Trends Forum. I'm delighted to see you all here today. We have some great guests and a fascinating subject, and I'm really looking forward to our conversation. We're going to put some acronyms together: we're going to talk about OER and we're going to talk about AI. OER stands for Open Educational Resources, and AI, of course, stands for Artificial Intelligence. We've been looking at both over our forum's nearly decade-long existence. For a long time, we've been looking at how institutions can create, support, use, and sustain open educational resources, and we've been looking at the intersections between AI and higher education. Now we're going to bring all of these together. Our guests have co-authored a great, great paper about how we can apply the lessons education has learned from working on OER for a long time to what's happening with artificial intelligence. The paper, by the way, is freely available. Look at the bottom left corner of your screen. You'll see a kind of tan-colored box. Just press that button and it takes you to the paper right away. We have three authors. They're all great people, very, very different people, and I'm going to bring them up on stage one by one, and then we can start our conversation. So to begin with, let me bring up Anna Mills. Good afternoon from the East Coast to our West Coast friend in the morning. Thank you. How are you? From San Francisco. Oh, excellent, excellent. How are you doing? I'm doing well. I'm very excited to be here. Oh, well, we're absolutely delighted that you can join us. And, you know, I think I mentioned to you, we have a tradition here on the forum where we ask people to introduce themselves not by describing what they have done. That's all great. But what are you going to be doing? What does the next year hold for you? So what are the projects, what are the ideas that are top of mind for you as you look at 2024?
Well, one thing I'm exploring is how we can share our rough ideas and experiences with teaching about and with AI: collaborating and trying to bring together different efforts to create a space for that in higher ed. And the other thing is figuring out how to pilot with students a non-profit app called mysafeedback.ai. So looking at a way to, you know, how do we provide enough guidance and guardrails so that we're encouraging a use of AI that supports students developing their own voice, their own sense of critical thinking, their own experience with the writing process, and a critical perspective on AI at the same time. So playing with that app and seeing where that goes. Oh, I'd love to see that. That sounds really cool. And for the first project, what kind of space are you thinking about? An online venue or something like that? Yeah, definitely. I'm kind of building on what I did with the crowdsourced resource list, looking at what are some very easy-access, easy-to-contribute-to, and easy-to-search ways, you know, which organization would host that, and how we collaborate among organizations so that it's really easy just to jot down some notes on "here's what I tried with my students and here's how it went," and then tag that according to discipline and, you know, pedagogical approach and all other kinds of tags. That sounds fascinating. When you have something like that to share, please share it with us so that we can spread the word. Definitely. I will. Thank you. Very good. Well, hang on a second, because I want to make sure that we can bring on board your co-authors. So stay tight, or sit tight, and we will bring up your colleague Lance. Hang on a second so we don't misfire; we're going to bring him up a little dramatically right now. So Lance Eaton, welcome, sir. Hello. So happy to be here with Anna and Maha, obviously, and yourself, and all the wonderful folks here. Well, there's two great things.
One is that you're with College Unbound, which is an amazing, amazing institution. And the second is you have a beard, and of course that makes you privileged among all guests. Where are you? Are you in Rhode Island with College Unbound? Are you elsewhere? Yeah, I'm in Providence, Rhode Island. College Unbound is primarily in Providence, but we now have iterations in Philadelphia, Camden, and Chicago, and we'll have probably two more cities within the next year. Wow. Camden, what a great idea. Oh, wow. Well, I love Providence, one of my favorite towns. As an H.P. Lovecraft fan, I have to go there and look for tentacles and things. But I have to ask, what are you working on, Lance, for the next year? What are the big projects and the big topics for you? A few friends in the crowd know this: finishing my dissertation is number one. I am close to finishing the analysis phase. Outside of that, there's continuing some of the work I talked about in the article itself, where there are a couple of students that I'm actually working with, and we're still, I mean, it's been months now, we continue to meet bi-weekly and are writing some things based upon what we did around developing policies around generative AI. So really working with them to hopefully get some of those pieces out and also presenting with them. They've done a couple of collaborative keynotes with me, and one of them is going to be with me on a leadership panel at EDUCAUSE this year. So looking for additional opportunities to really elevate their voices. And then more thinking about how we grapple with generative AI, like other people here, particularly trying to reconcile what's within our control and what is just kind of the larger cultural and economic forces that create the need, or the franticness, to lean on these tools, both for faculty and students. This is true. Well, that last point is such a crucial topic, which I think we'll be touching on in the next hour.
But I love the work that you're doing with your students. That's fantastic. Maybe down the road a bit, once you've finished the dissertation and we can refer to you as Dr. Eaton, I would love it if we could do a forum session with some College Unbound students on AI. Yeah. Yeah, I'd like that a lot. Well, welcome. I'm glad to see you here. And we have one more. We will round out our troika, and we can bring out one of our most popular forum guests, coming to us from the most extreme time zone, I think, of all of us right now. And this is our wonderful Maha Bali. Good evening, Maha. Hi. Salaamu alaikum, everyone, because that works for all times. It does. It does. Well, I'm so glad you can be here again. We keep bringing you on as a guest, and every time you rock the house. Maha, a few weeks ago, I think you were on and we asked you what you were working on, and it was roughly 100,000 things. You had so many projects going on. If I could ask you, in terms of your academic work supporting faculty at the American University in Cairo, what are some of the big issues that are looming large right now as the semester starts? There's a lot of issues, definitely mostly related to artificial intelligence. But I think there's also a lot of socio-emotional issues, a lot of burnout, and then the combination of AI and having to respond to AI: people being burnt out, not wanting to spend the time learning it, because, justifiably, they're burnt out. And at the same time, we're like, you just need to learn what's going on here so you can know what you need to do with your students. And of course, they're exhausted about this whole changing of their assessments. And they're frustrated by us telling them, don't use the AI detectors, because they're not accurate and it's going to be unfair to students.
And it's this very difficult space of trying to be fair to students, but also trying to be fair to the faculty, trying to be there to support them. The best way to support them is to ask them to learn it, and where's the time and the incentive going to come from? And I think especially of adjuncts, who are paid less and have multiple other commitments. It's a really big ask, and we need to figure out what to do with this. And I feel especially for people who are teaching language at the early levels, where you can't tell them to integrate a little bit of AI; they're not going to be able to meet their learning outcomes. They really need to teach these foundations, and all of a sudden they have to do it differently. It's a lot of pressure on them. And no matter what kind of ethical concerns we have about AI, ethical objections, no matter how much we try to teach critical AI literacy, explain to them about the biases and all that, the students are still not all mature enough to stay away from it when they need to, even when it's for their own good. And so we're all about trying to figure out how to make them use it and still learn. Or let them use it and still learn. So that's where I'm at on the academic side. But also really, really trying to get people to remember that in most things, unless you're teaching language or writing, the learning is not the writing. The writing is a representation. We are learning, and then we write to represent that. There are so many ways to keep representing it in different ways as well as the writing, so that if AI is going to help with the writing, make sure that the actual thinking is happening outside of the AI. Make sure that you're having conversations, make sure they're having an authentic learning experience that they are writing about, in which case the AI won't be able to write it. And I'll tell you the crazy things later that I'm trying to do in my class to make sure that the AI can't do it. Which class?
So I teach a digital literacy and intercultural learning class. Oh right, right, right. So one of the things I told you about last time is that my students are going to go out and talk about AI and some of the other socio-emotional learning stuff that we learn in class, and then they're going to reflect on that, and there's no way ChatGPT knows what they did at the school, so I hope those reflections will be authentic. And the other thing, a very strange thing that I'm adding to my class this semester, is that we're going to adopt a little plot of land on campus, which is something I've been doing myself. I'm going to involve my students in it so that they're actually planting and following their plants, which we would think has nothing to do with the class, but I'm going to make it have something to do with the class, related to community building and socio-emotional well-being. But also: how can you use AI to help you identify what is a weed and what's not a weed, and how do you use social media to learn about planting, and how do you use social media to let other people know about sustainable farming and to learn about these kinds of things? So I'm trying to make it work. And the AI can't write about this, I hope, unless I end up using AI, you know? Well, it's a great project. I can't wait to see that. Personally, I think that kind of biophilic design is just excellent and a real win. What would you call it? Biophilic, like love of life? I'll put this in the chat. I like that. I'm going to teach them a new word on Monday. Yeah, there's at least one consulting business that just does biophilic design for education. The Icelandic musician Bjork is a huge fan of biophilic design. Really? The simple idea is just including as much bio, as much life, into design as possible: plants, animals, nature. I'm coming to you from a room right now which is enclosed. It's an interior room, so it doesn't have any windows. All I have is the built environment around me.
I'm going to introduce windows, sounds of the outdoors, colors, more green, for example. I'm not doing it enough justice. Take a look; there's an entire literature to research on the topic. All three of you have done all of us a great service with your article about OER. Friends, if you're new to the forum, what I'm going to do is ask our great guests just a couple of questions to get things rolling. But what I would like you to do is think about the questions and comments you'd like to put forward. As our guests respond, and as you reflect on their responses, please think about the questions you'd like to ask. Again, please use the chat box, but best of all, if you have something you want to say out loud, you can click that raised hand or put it in the Q&A box. My first question: the open education movement, we can date this back maybe 20-plus years. This is a long time of many, many educators of all kinds, librarians, faculty, technologists, and so on, doing lots and lots of research and also lots of production. We've had companies, nonprofits, and just tons and tons of open education content available in multiple media, multiple formats. So I think it's a brilliant move to tap that body of knowledge for thinking about AI. What are a couple of the first connections that occurred to you when you started drawing these two bodies of work, OER and AI, together? Was it economic sustainability? Was it trying to come up with a proper technology? Where did you start? Since there's three of you, one of you gets to jump in, or if all of you fall silent, I'm going to pick on Lance. I'll say something really quick. Anna approached me first about starting to write about this, and she had a particular angle, but we expanded that angle to not be about OER. It's about open education and open educational practices. It's not about OER.
The OER is like textbooks or open materials, but we're talking about open practices, as in the communities that we've been building for all these years, that have supported each other in this moment of a shock of uncertainty, where none of us knew anything. There wasn't like an expert to go to who had a book. There were experts, but it wasn't enough. Hi, Brent, I see you. There are experts, but they weren't expert on this moment, with this technology, with that level of expansion, with that impact. We all supported each other on social media, Twitter, Instagram, TikTok, all of that. It was about the communities and the people and the processes of sharing with vulnerability, because you're not sharing a ready-made book that's perfected and edited and everything. "I have no idea how to cite AI. Shall I do it this way?" "Oh no, that's not a good idea." Being willing to do that. So to me, that's the angle that I think is the key thing here. Yes, go ahead, Anna. Please, thank you. Yeah, I love that. And I think the big learning that I had in the process of working on this paper with Maha and Lance was that I had come to it from... I've written a textbook on writing that was OER, and that was a revelation, because I could adapt other people's materials and I could share my own and other people could adapt them, and I worked with 14 collaborators. We were writing together and annotating, and I was continually adapting it based on student feedback. So there's this sense that it's very rapid, organic, flexible, and that potential could be applied to AI and was very much needed. So part of it, I came to it just from, oh, well, I could update my textbook and other people could do that, using open licenses that facilitate this kind of collaboration and rapid response.
And then Maha made me realize that that was just one piece of this bigger picture of an ethos of open educational practices that really was connected to these practices of digital collaboration that people became more familiar with during the pandemic, but that people like Maha and Lance had already been promoting: the idea of a personal learning network and sharing on social media and through listservs and groups like this. And those two things were really connected and had incredible synergy together. The open licenses, the OER idea, and the idea of involving students, the open pedagogy, really synergized with that kind of learning network through digital collaboration in higher ed, and those things come together to make this really flexible and very positive form of response to AI that could give us some more hope, I think, when we feel overwhelmed and we feel the uncertainty. So yeah, that was my learning. Wow, so involving students as co-producers and co-creators, and then the communities of support that we needed to make OER work, or make open education work; that's a great correction, Maha. And also that willingness to be open and vulnerable, to take risks, to make mistakes, to admit to not being the 100% expert in this field. That's a great set of overlaps. That's excellent, that's excellent. Lance, please go ahead.
I'll just add that when Maha elevated it to that level of open educational practices, that was such a clicking point for me, in that there was just so much of my experience in working and doing instructional design and faculty development, where you're often in departments of one or two, and being able over the last 12 years to develop that network of other people who are doing things, figuring things out. There are lots of great groups out there, and a lot of great new groups that formed. And I didn't want to hold it in; I wanted other people to take things and run with it. I wanted to make sure that we all did that, that there were iterations, and also that people would do things that iterated from us and then we would be re-inspired to other things. So there are these wonderful feedback loops that also occur within all this. So the feedback loops between projects, between practitioners, between communities. Yeah, excellent, excellent. This is a terrific set of connections. And then one more aspect, which is openness with students; we talk about this. First of all, one very funny thing is the way I learned how to use ChatGPT in Egypt, where it's blocked and normally you wouldn't be able to get it, was by talking to students, like school or university students, explaining how to use a VPN and pretend you have a phone number outside Egypt, and so on, to get in. Not from people my age. So that's one funny thing. But Lance has done really amazing things with students. One of my covert agendas in everything I do is whenever I meet with people, I try to give them something free that they can use, that's handy. So in today's meeting, Maha has just provided that: if you have issues with ChatGPT, accessing it, there we go. Thank you. Let me ask a second question, because your responses to the first one were so great. What are some of the ways that we can think about AI of all kinds as enterprises, that is, as a college or university or library or system, as they respond as an entity? I
was really struck, Lance, by your point about being a department of one, and I know this feeling very well. I think many people who are involved in this conversation know that feeling of being just one person making a decision, either in support or in practice or creation. But what have we learned from open education about how entire institutions respond as an enterprise, and how can that apply to how we engage with artificial intelligence? I mean, I'll say this is some of the concern, the challenge, that I have. On the one hand, yes, it can be helpful to reduce the load, and I think that's some of where this will come in. But I also see it as: it does now, but five years from now, when we're all needing to use AI, there's doubly more to produce, three times as much. I think that's the thing I'm worried about as it gets to that enterprise level at organizations: yes, it will help us, and yet when we start to take on bigger loads, because this will help us effectively finish or do tasks quicker or sooner, then the demand is for us to do more, quicker and sooner. So the way I describe it is, maybe you only had to do a quarterly report, and now it's going to be a weekly report, because now you have these tools, and so there's a bigger demand on your work. This is the thing I worry about in our system as it is, because it won't solve for making work less stressful; the new bar will just be set as you're doubly or triply reactive. And I know that's not exactly answering your question, but it's the thing that I worry about. If we are already burnt out, we're already feeling all these things, I'm not hearing anywhere where it's like, oh, this will mean we can slow down a little bit, or trust that that will happen. I feel like it will just be, ooh, now you can do more. So I say, if you save time, you make a little more stuff. Anne-Marie Scott was on this ten-minute, I think it was ten-minute, interview with Tim Fongs, if someone can find the link
to that on LinkedIn; it's a great interview. But she's saying, if we're going to use AI to do the crap that we don't want to do, maybe we should just stop doing that crap. Because, you know, if I can prompt AI to do what you're asking me to do, then you can prompt AI to do it for you. So why should I do it? You do it. Someone asked them to write a speech about something that they really didn't want to write anything about, so they gave it to some AI that's supposedly better than ChatGPT, whose name I forget right now; I think it's called Alphabet, anyway. And it gave a good speech. So I'm like, great, so that person could have done that themselves; then they don't need you. So what's the point? So maybe nobody needs to hear that speech if nobody wants to write it, you know what I mean? Well, that's one model for working with AI, but another would be augmentation, and dialogue, and using it to push our own thinking and to develop our own ideas. And that can be a gray, murky area between that and letting it take over, but there could be ways to work with it where we have a sense of the intrinsic meaning of what we're doing, but we want a sounding board, or we want something to speed up the formal aspects of the process. And I mean, that's what I would want my students to use it for, if they use it, rather than just to automate something that's not meaningful. But we've already seen that academic professionals of all levels are ready to do that very thing, and it might be something that we shouldn't have to do. But I think, Lance, you raise a crucial point. There's a principle from social science called the Jevons paradox, where when you make something more efficient or easier to use, people use more of it and it stops being so efficient. So if you have a two-lane road and you make it four lanes, you think, this is great, there'll be more free space; well, more people will drive and it fills up again. So if we have the technology to make us more efficient, we can use it more, which brings
us back to the questions of overwork and stress that you've been addressing. Please go ahead. So I am on that side. I certainly agree with Anna; as a collaborator, like my partner, I see her use it in really great, creative ways, exactly that, as a thought partner, trying to think through things or gather initial ideas on topics she's exploring. And I think that's the aspirational space I want to be in, really seeing and thinking about the ways, in talking with my students, there were some great ways that they made sense of it, and they were like, oh, this has been helpful. I had this one student, she was great; she was just like, I took ChatGPT, I threw my notes into it to organize them. And, you know, people will say, well, she should be able to organize her own notes. But she's 30-plus years old, she understands who she is, and she's just like, I can try to do that, or I can just do this and be further along in the process to get that paper done, or things like that. So I think there are a lot of those types of wins that feel really exciting, and I think that's part of why I wanted to do the course, and part of why I continue to be in conversation with students, because we learn those things and see how they get implemented in different places of work, even for myself. And this is where I think about helping faculty or institutions look at it: there are just some things where it's like, oh, that's so much easier. The simplest thing, when talking to faculty, that we all have to do at the beginning of the semester, we've all done this probably in the last few weeks: we've all built up the calendar, and we toggle between the calendar and writing out the dates of every Tuesday or whatever days we're having class. Or you can just ask ChatGPT to generate that list. So there's lots of those. I see it as a task minimizer, for those things that we need to do, and we can take it from 10
minutes to one minute, and I think there's a lot of value offered for that at lots of different parts of institutions and organizations. You inspired me to also add something about what kinds of tasks can be equalized when we use AI. So one of the things, I don't know if I've said this before on the forum, but I'll say it again: if you use AI to help you summarize a reading. So Bing allows you to summarize a website, or typeset.io, I think, will allow you to upload a PDF of a paper or something and ask questions about it, and it will answer them for you. And that is very useful, especially for non-native speakers, in that faculty at my institution, which is mostly in English, will sometimes give students readings that are jargony and above their reading level, and they're not willing to rewrite them to give them something less dense. And the students are not going to understand it better just because you're forcing them to read it; that's not going to happen. And so those things will make those readings more accessible to students without asking faculty to assign different readings. And at the same time, it's not anywhere near the area of plagiarism, as long as you're not asking students to summarize for their assignments, which, I mean, what's the assignment, really, unless you're teaching them the actual task of summarizing? So then they understand the reading, and then they can actually have a good discussion with you. Hopefully things like Bing and that kind of AI are not hallucinating in the way ChatGPT does; I'm hoping. I've tried it on papers I've read, and it does help. And it helps me too as an academic. I usually can read; maybe people will lose the ability to read, and maybe they'll lose out on some details that are important. But Word used to have an AutoSummarize function that was like 80% good, you know; it would miss out something. But if you're not a good reader anyway, you're going to miss some things when you're reading anyway. They might be different things than what
the AI misses out on, but I think for non-native speakers, this is the difference between reading it or not reading it at all. So if you get 50%, you've already gotten somewhere, I think. And maybe this becomes a new process, where people's first experience with something is the AI summary of it, and then when they want to go back to it, to dive more deeply into it, then they actually figure it out. When I was doing my PhD, I would read the book reviews about a book before I read the book. That's a more critical aspect of it that also helped me be critical of the reading, but you know what I'm saying: we all have shortcuts that we use that are actually smart and help our learning. So it's about trying to find that space. I'll add to that two things. You mentioned non-native speakers, but even native speakers; I mean, it took me 10 years to read Foucault's History of Sexuality. I might as well have learned French and tried to read it that way. The only reason I got through it was an audiobook, and yet if this tool had been around, it could have helped me so much more in the class where I was actually assigned it. I highlight that as a really good value-add of this: being able to really find out, if you're taking a class where Foucault is in the conversation, you know, there are people in that class who are in love with Foucault and can read him in 15 different languages, and you're going to feel like a fool to even ask, am I reading this right, or am I reading it upside down, or whatever. And then again, back to the teachers: the thing I like is it can be the infinite example generator. We love getting a student's paper that's a really good example and saying, can I use your paper? I want to use it as a great example. But we're never going to ask a student who writes a bad paper to use it as an example of a bad paper; that's bad form. But we can have this tool create these types of examples, a range of what might be good and bad, for us to use to help the students understand
what's a right way to do this and what's a way that probably isn't going to go well. So I think those are other ways, at least within the teaching space, that I find really exciting and interesting. This is terrific. I would ask you more questions, but you were all so good, and we have questions that have come in from the audience, which is what we're really here for. So let me just bring up the first one, from Kirsten Helmer, and this is a really, really good question; it comes back to something we just started talking about here. Would you say the ethos of open educational practices has always been challenging the idea of authorship and intellectual property, and now AI is pushing us to develop that further? Speaking of Foucault, though. Oh my gosh. It's such a different way of doing that, I think, with open education, because with open education, the person who creates the thing has chosen, with their own agency, to share it in very particular ways, to make it available to others, and they've chosen how; the Creative Commons licenses tell you how you want someone to use your work. So a lot of times I don't want my work used commercially; I want to provide things for free and not have someone else sell them, for example. But with AI, one of the things I really don't like is, if we tell students to cite that they've used AI somewhere, they can do that, but I don't know where the AI got it. AI is synthesizing stuff from all over the web, and I remember this very particular thing: I was asking it about white supremacy culture, and it said something. There's a very particular author, Tema Okun, and her co-author, who wrote that, and I asked the AI, so where did you get that from? No idea. I have no idea. And I say, so who's the person? But it won't directly tell you where it got anything from; sometimes it's synthesized from all over the web. So if it's, like, intro psychology stuff, of course it's not one person, but something like that is very specific to that author. So that's the problem. And of
course, the way it affects people, like the visual AI and the effect on artists, is truly problematic. And then, oh my god, the deepfakes with people's voices, and singers, and anybody being able to create a song in someone's voice; those things are really problematic and dangerous and can be used in very harmful ways, and have been used in very harmful ways. And then I'm thinking about how I'm going to teach fake news now, going forward, with all of this. So I've gone quite far away from the question, but because of that ethos: I thought it was the ethos of open education and authorship, and making something open is completely different, and AI did everything without permission. Right? So I think that's the answer. Thank you. I mean, we can, you know, it's not over when it comes to the question of AI and permission and labeling of datasets and who has the rights to the outputs and how transparent the algorithm is and all of that. And so I think that we can take some lessons about that kind of transparency and labeling from the development of OER and hopefully push for that. I mean, that's really building on these fundamental academic values about citing our sources, and that's an open question for policy and democratic oversight of AI. I guess I'm a little more hopeful, or I want to have more hope, about being able to use AI and label when we've used it, and have some way to trace back, get more information about how it created the output and whether it had the rights to it and all of that. I probably have the more extreme view on this, and I'll preface it with: my dissertation is focusing on academic piracy and how scholars make use of places like Sci-Hub and LibGen, and I have a very, very challenging relationship with copyright and how and where it exists today, where there's stuff that's still not in the public domain from people that died, like Hemingway's stuff, not that I'm trying to hold him up, but as an example, paywalled more
or less. And so there's a way I see the open education and OER movement, the open access movement, as this groundswell, ground-up approach to really trying to address that and trying to change that. And I love it, I live it. All of the stuff that I do, I put under Creative Commons licenses; I'll do talks and make sure the text gets up on my blog, which is Creative Commons licensed. So I am fully there, and there's a part of me that keeps wondering: how do we fix this thing that feels solidified, in terms of the way copyright stands and what its original intention is? And I wonder if AI is going to blow it up. I can see it going lots of different ways, but could it blow it up in a way that gets us back to rethinking it, and maybe renegotiating what the terms are? Because it just feels so out of proportion, like 70 years after the author dies, and that's largely for the benefit of the companies, not the individuals or the families of the creators. So for me, the question is: is this the thing that breaks that, in a way that's helpful? Outside of that, I'll go back and say I 1000% endorse, and have, those same concerns that Maha brought up around how it can be misused, how it can exploit lots of people in lots of different ways. But there's also a part of me that thinks: oh, I would love it if this shattered our contemporary approach to copyright. Well, it sounds like we're just helping you write your dissertation, or we're making it worse; I hope at least the former. Kirsten, what a fantastic question, and the three of you each took different angles of approach on this; there's a lot to it. Thank you, Kirsten, and thank you, the three of you. We have more questions coming in, which is, as always, delightful, and this is one that goes back to the question of sustainability. I think this is Guy Wilson: what concerns do you have as educational tech companies add more generative AI features? Will this lead to less flexible approaches? Will it be just more of the same rigidity we've seen from them? Well, my concern is whether they're building in critical AI literacy and labeling of AI text as they do that. So are they prompting students, are they creating experiences for students to recognize the flaws in the AI if they're interacting with it? Are they directing students to learn about where that output is coming from, how it might be biased, and things like copyright? Are they promoting interrogating the privacy policies? You know, if they're interfacing with OpenAI, are they looking to have students understand where their data is going? So those are some questions I would start with. Really good questions. Yeah, Maha or Lance, do you want to add anything to that? I mean, I have a general default skepticism of ed-tech companies, just because of the ways that many (again, not all, but many of them) use student data, use that labor as part of the ways to improve the product or make it more connective. And I think about some of the publishers who are now ed-tech companies, where there's no cheap alternative to the ebook and all the extra stuff that comes with it. In general I have that skepticism, so them rolling in AI makes me more concerned. Then the other piece is how much more of those things get created that then get sold. I'm forgetting the woman's last name, but her first name was Taylor; she's done a good series in the Chronicle of Higher Ed around courseware. And my concern is around how much courseware increasingly takes away student agency, and soon instructor agency, as it becomes required or becomes licensed, or things like that. So I know I'm talking more in general than specifically about AI, but I just feel like AI will amp up and increase those concerns, which I don't feel get well addressed in higher ed.
Speaking of agency: will people be able to opt out? Because a lot of this AI stuff is introduced in stealth mode, and you just find it there, and nobody gets to say whether they want to use it or not, or even recognize that it's being used in the first place. So I think that's an issue. I wonder why there aren't conversations (I mean, there are, but not about ChatGPT specifically that I've seen) around whether there's ever going to be more interpretability, or sort of transparency. We don't even know, right? We don't know which datasets it was trained on; we know some of the older stuff, and how it was trained we know some of. And so I'm kind of like: why isn't there more transparency on this, if more tech companies are using something we don't fully understand, or building on stuff that's already problematic in all these ways? I'm concerned about that. I also don't know what kind of climate impact this all has. There are a few people who are talking about this, people you should have on here at some point, Bryan, and very few of us are talking about it. We don't know how bad the climate impact of having trained AI is, and we don't know how bad it is every time it gets reused and every time the API is used. I don't understand this stuff very well, but we should be concerned about climate, so we should be concerned about this thing that we say is inevitable, that we can't run away from, that's going to grow, and all that. If we stop using plastic, if we're working well on the plastic side, and then this thing is coming and people are unaware of that kind of impact, there's very little connection between the two of them. So again: empowerment, opting out, agency, critical thinking. These are all huge, huge dimensions of this. Guy, I'm so glad you asked this question, and I'm so glad the three of you gave us a really good skeptical way of thinking about how this may unfold. Oh, please, please go ahead. Sorry, I just wanted to add that I do think there's potential for
building student agency. Like the MyEssayFeedback app that I'm consulting on: the idea is that instructors can write their own prompts for feedback and share those as public domain, and then there might be potential for students to write their own feedback prompts and continue on that chat session, so that the software could enable a more organic and learner-centered interaction with AI. That's perfect, and it's like you were looking over my shoulder at the queue of questions, because check out the next question we have. This is from our good friend John Holmbeck: doesn't AI open the possibility for learners to take radical control of their learning? So I guess that's a question we can think of in general, but also to see how it connects to open education and the aspects of student empowerment that you all discussed. I mean, yes, with the caveat, right, to the degree that it can be reliable. And I think that's one of the challenges: if you don't have some working knowledge of the area you're exploring, it's not always clear that you're actually getting the right information, the right knowledge and understanding. So yes, if we are training them, if we are making sure they develop those critical AI literacy skills, I think it can do that. And that's part of it. As I said about my partner, I watch her use it in really smart ways, and she knows not to trust it, and also how to elicit really complex thinking and ideas that she is in conversation with. Part of what I like about it, and I've said this in some spaces, is that it can be a tool that unlocks the hidden curriculum of the world. The example I always go to is the cover letter. The cover letter is the most BS piece of writing we all have to do, right? So much of the rhetoric of that thing often has very little to do with the job you are applying for, but there are all these gestures you have to make in that cover letter. So to me it's like: wow, having something to help you get through that feels really great, really powerful, and yes, it can help somebody learn: oh, if I'm going to tell my story in the context of applying for a job, this is how that should look. I think about that for multilingual learners, for people who are neurodiverse, for people who just look at a blank page and want to cry. This gives lots of those spaces too, and can be really powerful. But I think there is that challenge of developing the skills to know where and when. Then I think there's that conversation about increased learning, and increasingly being able to use that in lots of different spaces to build one's engagement with the world. And when we talk about tying it to education, one thing I'm always thinking about is open educational, or open pedagogical, practices: really making sure what we do in the classroom is something they can take with them into the outside world, that the assignments have meaning beyond just checking a box or getting a point, but are applicable and usable elsewhere. So I see it tied in well with that as well. I'm trying to compare this concept of learning with and from AI with other things that we had before that gave learners a lot of agency over their learning, right? And you just mentioned open educational practices. So what's the difference between me asking a question of ChatGPT and asking it publicly on Twitter (or X, or whatever it's called), or Mastodon, or wherever, right? I might get the same answer as Lance's and Anna's answers; maybe I'll get an even longer answer from ChatGPT. Possibly, if it's something that's been talked about a lot before, I'll probably find something meaningful from
ChatGPT. But when I ask Anna or Lance, I actually make a connection with another human being, and that connection becomes a relationship, and there's so much more to it than the answer to the question. It's not an instrumental interaction; it's not just a technical Q&A. You could have asked that question on Reddit, you could have checked Wikipedia; those are all different ways we were doing this. We could have checked the internet. The difference between the internet, or Wikipedia, and Anna or Lance is that I can check the credibility of Anna and Lance. Who are these people? Do they really know what they're talking about? I can check out what else they've said before. Checking out what ChatGPT has said before doesn't really help me know its credibility. Wikipedia has editors; we know it's an okay first step for certain things, and it's marked when something is low quality or missing references. We don't have that for ChatGPT. We don't know, when it gives us an answer, why it's so confident about that answer. Or can it actually tell us, "I'm not really sure, but this is maybe what you're looking for," instead of just sounding so confident all the time? That kind of thing. So for a learner who doesn't have the critical literacy yet, I think it's actually problematic, because it sounds like they're talking to a person, but they aren't. And for a lot of us, it has a nice tone that makes you like it, and that's just problematic: you start to feel like you're talking to a human being although you're not. A lot of people say thank you to it, and a lot of people say thank you to Siri, right? And I always think there's an element of focusing on learning as knowledge, and not on the socio-emotional aspects of learning. Even though I know there's affective AI (a very good friend of mine is one of the leaders in affective AI, and I know it was developed for people with autism, and it's probably helping people with autism), I think for most of us, who can build social relationships without the support of technology (I mean, using Twitter isn't support of technology per se, you know what I mean), learning is not just about the knowledge, the exchange of information. Learning is a lot more than the exchange of information. Yeah, but how do we build in that sense that learning happens in relationship, and relationship is key to learning? With the possible uses that are exciting to students in education, how do we allow those things to coexist, and have AI not take away from relationship in learning? I wonder if maybe we can draw a lesson from the literature around educational gaming. One of the lessons is usually that games can be terrific pedagogical objects, they can really do a lot, but they need an instructor to help make them actually shine and really work. And maybe this is one of the roles for instructors: to help students engage with AI in a way that returns to them the social and emotional learning that I was talking about, and the interpersonal relationship they were just talking about. We're almost out of time, and we have so many questions coming in; I want to forward the ones we don't get to, to the three of you, because they're really good. But I want to have a chance to ask this one last question, because it ties into what you were saying just a minute ago. This is from Joseph Robert Shaw: the most alarming feature of AI is the tendency to hallucinate facts that are not valid; what does open education plan to do to safeguard against this? Of course, you're speaking on behalf of the entire open education community, being the shining stars of open education, and we all have one universal direction. This has been an issue with open education: there's quality control, and there have been different ways of doing it. You can think about MERLOT with its rating system, for example, and then lots of conversations and trying to convince faculty to use
OER in different ways. But how do we apply that heritage, that practice, to AI's lamentable problem of making stuff up? Well, I think what we need to do is that critical AI literacy around how language models work. There's a fundamental structure there that creates plausible rather than true outputs; it's designed for plausibility, which lends itself to fabrication. I don't think they have a solution to that yet. There are some workarounds, and we can teach the workarounds and emphasize them, but I think we want students to have the experience of seeing where it's making something up, and seeing how it's disconnected from experience and reality and a search for truth. That's still a very human process; that's how I see it. Yeah, so many thoughts; I will try to keep them concise. I guess I would say, within open education and OER, I want to call bullshit on this idea that worries about quality control in open education are more of a problem there than elsewhere, because we also have transparency, and we have the ability to edit and update in a way that traditional publishing and production of knowledge doesn't, right? So I think there continues to be this misalignment, or maligning, of open educational practices. And the piece about AI and its hallucinating (I don't like that term; it's just presenting information that isn't accurate): that's part of critical AI literacy, but it's also just general critical literacy. There are plenty of things from non-AI-generated sources that should be questioned, whose truth we should wonder about. My glaring example, a couple of years old now, is when you have textbooks in the United States framing enslaved people as immigrants who came here. People can say, well, it's technically true, and I'm like: it is not. So I guess that is something we always have to figure out and fight with, and we had students presenting us with wrong information well before this. There's nobody here who's taught a course who hasn't had students present wrong information that they got from somewhere. So I understand it, and yes, it feels exponentially more challenging, but it's no different. And I'll just go back to Autumm Caines, who's a friend of several of us and a wonderful thinker on this. One of her latest blog posts was basically: you know what's going to help us with this? Good pedagogy. The same things that worked before generative AI are going to be the same things we should be leaning on as we're in this. Nice response, nice response, thank you. Well, I think you get the last word, because we're right at the end of the hour. Well, I was just typing in the chat: awesome, Autumm, follow her for her great work on this, and it's Autumm with a double M. She writes very clearly and very concisely, and she thinks in different ways that are worth all of us listening to. I also want to encourage people to follow Anne-Marie Scott on this. There are a lot of men talking about AI; listen to the women and what they have to say, honestly, and listen to people who are outside the US and what they have to say. They're looking at it in a different way. And one of the problems with AI is that it's been trained on flawed data that is also mainly from a white Western male perspective. So it's flawed because the information is flawed, epistemically flawed, and, as Lance was saying, it also has a lot of problems with it; it's building on that and doing it at another level. Well, that does bring us back to the question of transparency again, and as
you point out, Maha, the lack of transparency around black-box AI versus the transparency that we see in open education. I hate to pause this right now, but we have blasted past the top of the hour, so we're going to have to pause and draw a curtain on this. Thank you, the three of you, for a fantastic conversation: so many great ideas, so many terrific points. How do we keep up with all of you? What's the best way to find out more about you? I know, Maha, you're on a whole series of social media, from Mastodon to Twitter and your blog, of course. Easiest on my blog, since we don't know what's going to happen with Twitter. Very good: blog.mahabali.me. And Lance, how do we find you? I have all my ramblings on my blog, byanyothernerd.com; I'll throw that in the chat. Excellent, excellent. And Anna, how about you? How do we keep up with you on the West Coast? I just put my Twitter and my LinkedIn in the chat. And I would love to follow up with each of you on some of the different points here, because this has been terrific. Thank you all for joining us, thank you for this very powerful article, and we hope to see all of you very soon and to follow up with each of your practices. Anna, please let us know when your two different projects go live so that we can share them and learn more, and the same is true for all of you. Lance, good luck becoming Dr. Eaton, and good luck with everything you're doing. Now, friends, don't leave yet: I have to show you where we're headed next. If you want to keep talking about these issues, everything from transparency to ed-tech companies to agency to flaws in AI to what we can learn from the open education experience, please keep the conversation going: use the hashtag #FTTE. You can hit me on Twitter there, or on Mastodon; we'd be glad to hear more from you on this. If you'd like to see our previous sessions taking a look at AI and open education and other topics, just go to tinyurl.com/FTFarchive or go to our website. In the meantime, on our website you can also find out what's coming up next: we have sessions on academic labor, sessions on AI, sessions on how to meet student needs. Again, just go to forum.futureofeducation.us for that. If you'd like to learn more about AI and hear from me on this, just go to my Substack, aiandacademia.substack.com. And in the meantime, let me thank everybody for great questions and really good thoughts, much appreciated as always. It's a real pleasure and a really productive delight to think through these issues with all of you. I hope you're all safe as the fall semester hurtles on, and I hope you're all productive and well. Take care; we'll see you next time online. Bye-bye.