Welcome, everyone. We're so pleased to have you here today. We're very happy to present AI and OER: Redefining Education. Before I introduce our moderator for today's panel, very briefly: the Community College Consortium for Open Educational Resources is quite a mouthful, so we call it CCCOER. It's a community of practice and a regional node of Open Education Global. We have over a hundred institutions and organizations as members, representing millions of students. We support community colleges across the U.S. and Canada by promoting awareness and adoption of open educational policies, practices, and resources, and we have an executive committee and several subcommittees that help guide our work. To highlight just a couple of the ways we do this: monthly professional development webinars; the Open Ed community listserv with over 2,000 members, which we'll tell you more about a little later, because we hope you'll join; and engaging the open education community through advocacy events and collaborating on projects. It is my pleasure to introduce our moderator, Lance Eaton, from College Unbound. And Lance, I'm going to let you introduce yourself, so please take it away. Awesome, thank you so much, and I think we can stop sharing the screen. So hey, folks, I am Lance Eaton. I am director of faculty development and innovation at College Unbound. If your next question is "what is College Unbound," that's totally understandable. It's a younger college. We are in Providence. We're doing really cool stuff that's really student oriented, and I will stop there; otherwise, I will go on about it for about an hour. I am really excited for today's discussion, not just because the topic is really exciting, but to be in conversation with these two amazing thinkers, whom I have the honor to call colleagues and friends. So let's start today's conversation. Let's see, did we lose Lance? He's a man of mystery.
We'll give him another minute to come back. I will go ahead and start sharing the screen again, just because I can, while Lance is coming back on board. I'd like to ask our panelists to introduce yourselves, and we're actually going to copy and paste your bios into the chat so everyone has access to your wonderful links. So let's start with Anna. Will you please introduce yourself? Sure. Hello, everyone. I'm Anna Mills. I teach English at College of Marin and Cañada College, and previously at City College of San Francisco, all two-year colleges in the San Francisco Bay Area. I have written an OER textbook called How Arguments Work with the support of the Academic Senate for California Community Colleges OER Initiative, and I think my mentor on that project, Shagun Kar, is here today. I've created a resource list around AI and education, and I've been writing and speaking on that topic over the last year and a half. And yeah, I think that's probably enough about me for the moment, so I'll turn it over to Peter to introduce himself. Thank you, Anna. Hi, I'm Peter Shea. I'm a learning designer, a teacher of college writing for 30 years, and director of professional development at Middlesex Community College in Massachusetts. I founded a group for instructional design and education on Facebook, which now has over 19,000 members. I have a group on LinkedIn called AI for Education, which I've been running for several years prior to the ChatGPT release, so it's long established. My involvement with the OER community goes back over 12 years, because much of my early work at the college and other colleges had to do with open educational resources. But my particular area of interest has always been interactive open educational resources, where I see a very pronounced tie-in with AI technology. So great to see everyone. Great. So great to meet you both.
We're going to check and just see if Lance was able to get back in, so give us just a moment. He gave us some great questions and a complete run of play for this event, so we will follow his lead no matter what. Yes, sorry about that. Everybody was frozen. So did I get out my first question? This is classic. Lance, we're done. We did the whole agenda while you were gone, man. Awesome. Problem solved. That's right. We needed a LanceGPT. No. Did I get the first question out, or are you getting ready? We're ready for you. You're the panel leader. We didn't hear it yet. Okay, cool. So the first question is thinking back, way back, to when ChatGPT first came out and unleashed that generative AI tsunami that we're all experiencing. And I know that was like 37 millennia ago. What do you recall you were initially thinking about in this conversation of generative AI and open educational resources? We're going to start off with Anna; we'd love to hear your initial thoughts and just a bit about you as well. Sure. So I actually got into this in June 2022, about six months before ChatGPT. And my initial thought was that it was great that I was doing OER, because all my materials were public and there would be much easier synergy between AI and what I had already built. In my experience, I could use the open-licensed materials to fine-tune GPT-3, at that time, to follow the format of my textbook and critique arguments. So my sense was that we could have a bigger influence through AI because we're not behind a paywall, and also that we're much more poised to move quickly in terms of coming up with instructional materials around AI literacy and sharing those. We already have this ethos in OER of revise and remix and put it out there and share, and it doesn't have to be the final, end-all version.
So I thought we could really build on that ethos to respond more quickly as educators to AI in a collaborative way. So I was mostly excited at the beginning. And yeah, I'll let Peter give his first response. That's a great lead-in, thanks. You know, as I pointed out, the possibilities of AI's impact specifically on OER are quite compelling. In fact, earlier today, a colleague was talking about using Claude to combine and remix existing assignments in new ways. For people who have been doing OER work around remixing, AI is a great tool for coming up with innovative new models of existing materials. So right there is one particular application of AI to OER work. The other, and this goes back to my particular interest, is the ability of AI to help generate interactive content much more quickly than we have in the past. I've always been rather bothered by the fact that the OER world tends to be dominated by static materials, much of which is stuff we could have developed in the late 20th century. Very little of it is native to the digital environment. Part of the reason, I think, was understandable: creating quality interactive learning content takes time and expertise, and that was always a major stumbling block. But with AI, using the right methodology, you can eliminate weeks of work. For example, if I were creating a short learning simulation, the first iteration would probably take me a couple of weeks to plan out the scenarios, the branches, the feedback, to say nothing of finding the right images that were openly available. It was very time consuming, and that's certainly not my full-time job. But using a reasonably robust AI tool, I can now create a working prototype within about a day if I'm disciplined. And that's an enormous leap in terms of time.
And I think the time issue was the biggest obstacle to creating a lot of quality interactive OER, and AI has now removed that wall. We should make people aware that we can now do things we've never done before. That, to me, is probably the most exciting part of the introduction of AI to OER. Awesome. And I hear in both of these that ability to do more, which is something we're excited about around OER, because we're all here because we are deeply in it and appreciate it and know that labor, as you both pointed to, and that time. So I think that's really great. Now we're here 16 months later. What are you thinking about this intersection? What are you thinking about how these two fit together in the educational landscape? I'm still excited, but the concerns are more in the foreground, and there are some things that we really haven't yet figured out, and that, you know, the US Copyright Office and the legal system haven't figured out. There are some real concerns around source attribution and transparency with AI, and I think it'd be great to come up with some more systematic and clear procedures for those. But I've also been experimenting more: as Peter was saying about interactive OER materials, I've built a couple of experimental GPTs, sort of chatbot systems, that are based on or focused on my OER materials. So I think there's a lot of potential there for tutoring systems that build on the textbook materials, if they can point back to the textbook materials and appropriately give credit to the writer and quote, right? So it's a work in progress in the bots that I've built, but I am seeing a lot of potential. I built a bot for my students to discuss how to identify assumptions, based on my textbook chapter on that subject.
And these what are called RAG systems, where you upload a knowledge base, kind of facilitate that. So I'm excited about that, and I just hope that we can tame these systems and get them to reveal their sources a bit more consistently. Anna points to the relationship between AI and textbook material, and I think that's going to be an interesting nexus point, because, you know, we've talked about the interactive textbook, where you have the traditional reading experience, but then you go off to something else which is much more interactive, and then the student goes back to the textbook. That's going to be a very interesting area to explore, either with chatbots or with some sort of interactive tool like a simulation: you read about a principle, you go off and practice that principle, and then you go back to the textbook, and so on and so forth. So I think there's an entirely new and interesting model here. My one concern is that, at least in the early stages, AI will be used too much to pull the cart of the old paradigm. I gave a presentation where I had an AI-generated image of a race car pulling a wooden cart, and I said, that's kind of what I think we're doing right now. And there are inherent dangers in that model, because the acceleration of the race car is going to pull that cart apart to some degree. So while we're seeing ways to use AI to accommodate our traditional learning paradigms, we should all be prepared for the fact that it's also going to help crumble some of them. We're going to be less dependent on some of our older modalities than we have been in the past. And I think one of the gateways is precisely those tutor chatbots that Anna pointed out. The idea of every student having a quality one-to-one tutor has long been a sort of optimal goal for educators.
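For readers curious about the RAG (retrieval-augmented generation) setup Anna mentions, here is a minimal sketch of the retrieval step: score passages from an uploaded knowledge base against a student question and attach the best match, with its source, to the chatbot's prompt. The passages, chapter labels, and word-overlap scoring below are illustrative stand-ins, not her actual system.

```python
# Minimal sketch of RAG retrieval: find the knowledge-base passage that
# best matches a question, then ground the prompt in it with attribution.

def tokenize(text):
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question, knowledge_base):
    """Return the passage whose words overlap most with the question."""
    q_words = tokenize(question)
    return max(knowledge_base, key=lambda p: len(q_words & tokenize(p["text"])))

# Invented example passages standing in for uploaded textbook chapters.
knowledge_base = [
    {"source": "How Arguments Work, ch. 4",
     "text": "An assumption is an unstated idea the argument depends on."},
    {"source": "How Arguments Work, ch. 7",
     "text": "A counterargument responds to an opposing claim."},
]

best = retrieve("How do I identify an unstated assumption in an argument?",
                knowledge_base)
prompt = (f"Answer using only this passage and cite it.\n"
          f"Source: {best['source']}\nPassage: {best['text']}")
print(best["source"])
```

Real systems replace the word-overlap scoring with embedding similarity, but the shape is the same: the bot answers from retrieved passages it can cite, rather than from its opaque training data.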
And I think, you know, that certainly is something. I was lucky to be an early tester of Khanmigo, and I was really thrilled to see what it can do, particularly in regard to a well-designed chatbot not simply giving a student the answers but pushing the question back to the student. That, to me, is really important and addresses a lot of people's concerns. The next question is, how do we make a space for this kind of content in the OER world, which, as I said before, is still dominated by largely static content? Do we add it to existing repositories, or do we create entirely new ones? I want to lean into that question about static versus interactive, because I think it's important and valuable. And I have pandemic thinking in mind, not the pandemic itself but the representation of the pandemic as this big disruption, because of some of the things I've been reading. It hasn't been extreme literature, but it recognizes that some of what we know about our infrastructural systems, including our information technology systems, is that they are not always stable. So I want to tie that into this discussion: we're looking toward integrating AI more and interaction more, and being able to create these things, and the challenge is that OER has been about creating static materials. I just wonder if there's still a place for that, given that, well, we still know within this country, within the United States, I should be more specific, there's 20% of people who don't have steady internet access. Globally, we know that's bigger. And we know that whatever the next war is going to be, it's going to be cyber. So what does that mean for investing in interactive systems without also creating static or tangible systems, things that can be printed out?
So, since Peter opened the door, I'm going to hand it over to Anna if she has thoughts on this, or on what this means within that OER and AI context. I mean, I just appreciate both perspectives and the tension between them, and we've got to keep both things in mind. And, you know, to me, the more pressing concern is around transparency, because, as I think Jim Julius was mentioning in the chat, these systems have been trained on copyrighted material, and they don't cite their sources. So Maha Bali did this experiment where she asked ChatGPT to describe the characteristics of white supremacy culture, and it gave her this whole output that was taken directly from the work of Tema Okun. She documented that, and I replayed it; I tested it again a couple of months ago, and it's not solved. So if we're using these systems in any way to create something that we want to include in an OER, we absolutely have to do due diligence to see what's the original source. Is it giving us something that really should be attributed to a specific person or scholar? Because it's not going to tell us that. So that's really my pressing concern: it runs right into these values around source attribution that are so important to us. And supposedly all the outputs are public domain, right, according to the US Copyright Office at the moment. I don't think we can trust that that will stay the ultimate ruling, but there's a conflict there, because we can't put an open license on it. It's public domain, yet it might actually be copyrighted material that should be attributed. So any kind of program to use AI needs to recognize that and, I think, push for greater transparency about sources. Sorry, I went in a totally different direction and kind of hijacked that, but that's what I'm really most concerned about here. We'll come back to that, because I think you end on an interesting question.
I want to circle to Peter. Okay. Bringing up Lance's issue about the inevitable digital divide getting worse than it was before: I mean, it's an equity issue, absolutely true. But interestingly enough, in the developing world, the one technology that people even with limited means invest in is a phone. Which brings us to the question: are we developing OER materials that play well, that are usable, on mobile devices? As someone working at a community college in a major city, you can't help noticing students on their phones when they're on public transportation and everywhere else. The phone has always been very valuable real estate that we haven't fully explored in the educational space. And I think, you know, we need AI tools that play well on a mobile platform, and that create interactive as well as static materials. Because right now we've had static OER content which people can technically access via their phones, but I don't think they are. So if we try creating some interactive content for students that's open, that plays well on a, you know, not terribly sophisticated phone, but is also tied in to the quality static content that you're referring to and I'm referring to, I think that would be a win-win for everyone. Then we get more of that static content, but it'd be tied into other things which would make it more compelling to students using mobile. And that's one approach, because again, whenever I talk to people who are working on educational materials in the developing world, they say a lot of people just don't have computers; a lot of people have phones, because we've managed to make them very cheap. So I think that's a promising area of exploration.
And what I appreciate about that answer is the idea of better customizing the static material for mobile technology; I think that's the thing we're always grappling with and navigating. And there's something in what you're saying, Peter, that brings in that UDL lens: how do we make sure we have some static things while also layering on interactive things? I think there's an interesting balance there. But I want to go back to Anna, to your rightful concern, and obviously in the chat lots of folks are responding to this. What are some of the better, maybe even great, models we're seeing for crediting those sources? Of the AI tools that you've played around with, do you feel any are at least trying to get some of this right, better than other tools? You don't have to identify the tools that aren't doing well, but what are the tools where you're like, okay, they're at least thinking about this and working through it? Yeah, I mean, I think Perplexity AI is headed in the right direction, although it's not perfect. They do at least put the sources at the top of the response, and they direct you more toward the sources. Now, we don't know whether those sources are actually the only influence on the text that they generate, and they're not always actually lined up with the output, but at least they're trying to give a place to sources, and I'd like to see more of that. As far as the transparency, I mean, I think it's sort of on us to be doing that at this point.
And there are times when I don't feel as concerned about copyright infringement, for example if I'm using it to come up with more generic template phrases for critiquing AI output. That's one of the uses I put it to, to add material to my textbook: I had a page of template phrases like "the AI feedback sounds good, but it's not really what I mean," and I thought, well, let me come up with some alternatives. I don't think any of those alternatives are, you know, due to one particular person. The prompt I gave it is a much more general one, where it's going to draw on its whole corpus of training data. I'm not as concerned about that as I am about Maha Bali's example of asking about white supremacy culture. So we have to distinguish between the higher-risk and the lower-risk uses in terms of copyright infringement and lack of source citation. I've been playing with Perplexity AI and looking at how it works with sources, because there is some transparency: it'll tell you it's doing a search on these specific terms, and then it generates text based on the pages that it found. Again, there could be more transparency about that, but that's a model that's going in the right direction, I think. Obviously it's always a concern when you're generating any OER to cite the sources. In fact, at my college we've been building in-house tools that use AI, and one of them is an OER generator which specifically cites the sources of the information that it draws on. I'll put it in the chat. This was created in conjunction with Devin Walton, my colleague, who is an AI scientist turned community college instructor. It was a tool specifically intended to demonstrate that you can generate quality OER that does in fact cite its sources. It's built on the Claude AI tool, and it compiles about 18 pages, because we say specifically: give us 18 pages of content related to a particular topic,
but be careful to provide a works cited for everything that it alludes to. This was intended largely as a proof of concept, saying we don't necessarily have to wait for some company or organization to solve this problem for us. We can do it in house. I carry over the old edu-punk sentiment: let's not wait for a corporation to fix our problems, because they'll fix the problems in a way aligned with their value system. So one of the things about using AI tools is that we can harness them and give them instructions about how they should behave, as much as possible within the parameters. I think this is an example of how we can get OER generated by AI on the right track. Does it solve every problem? No, but it does show that it is in fact a solvable problem, and it's something we can do amongst ourselves rather than, again, wait for a vendor. And I'm seeing a discerning look on your face; I'm curious if you have responses to that. Well, yeah, I mean, I guess I'm just curious. You know, I think that sometimes it simulates providing sources, and we really have to check to see if those sources are real and if those sources are really what informed its output. So sometimes it's kind of a veneer, where it's actually just generated that content, and then there's the hallucination problem. So I don't know; I guess I'm more cautious about using it to come up with textbook material. I would use it for little bits. I would use it for examples. I would use it to extend what I've already done, maybe to create a first run of learning outcomes or page summaries. But for my textbook, I really lean into the sense that I want to use my expertise as an educator and in my subject matter, and I want to use the writing process to help me deeply think through what the approach should be and how I want to speak to students. I don't want to give that up to AI.
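Anna's caution here, that a model can simulate a works cited, suggests one simple mechanical check: compare every citation in a generated draft against the list of sources actually supplied to the model, and flag anything that was never provided. This is an illustrative sketch, not the in-house tool Peter describes; the bracket citation convention, titles, and draft text are all invented.

```python
import re

# Sketch of a citation-vetting step: flag cited titles that were not in
# the source list we actually gave the model (possible hallucinations).

def uncited_sources(draft, supplied_sources):
    """Return cited titles (written in [brackets]) we never supplied."""
    cited = re.findall(r"\[([^\]]+)\]", draft)
    return [title for title in cited if title not in supplied_sources]

# Invented example: one real supplied source, one hallucinated citation.
supplied = {"Tema Okun, White Supremacy Culture", "How Arguments Work"}
draft = ("Characteristics of the culture include perfectionism "
         "[Tema Okun, White Supremacy Culture] and a sense of urgency "
         "[Unknown 2019 Report].")

print(uncited_sources(draft, supplied))
```

A check like this only catches citations of sources that were never supplied; verifying that a cited source really informed the surrounding text still takes a human reader, which is Anna's larger point.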
I don't think AI would be that useful in me figuring out what approach and what words I want to present. So, just to be devil's advocate, I'm more on that side, though I would use it in certain ways. I think you're seeing two emerging schools of thought around it: one is very careful and slow, but the other is, let's try stuff and see if it breaks. Again, I've talked about level one and level two approaches to using AI for educational purposes, and level one is the more widely used approach, and it's more cautious. So with the tool that I pointed to, we were specifically very careful to set instructions and parameters so that it wouldn't just be making things up, because the person who created it, being an AI scientist, was very much aware of the problems of hallucination. I didn't want to introduce it unless it seriously addressed the issue of hallucination, because it is possible to get AI to link to legitimate sources. It requires a little bit more work than most people have been doing. I think we can all remember the famous case of the lawyers who asked AI to bring up case studies, and it hallucinated a bunch of case studies, which they then cited in court. The key going forward is getting AI tools to link to legitimate databases and sources and say: before you give me an answer, talk to your technical buddy. So early on, when I was doing any kind of math-related topic in ChatGPT-4, I would say, I don't want you running the math. I want you to talk to Wolfram Alpha, and between the two of you, work out the problem, with Wolfram Alpha doing the math part, because I trusted that Wolfram Alpha was designed to do much more significant calculations. The issue is, rather than open-ended AI, we want AI that is specifically linked to legitimate sources which can be properly cited.
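Peter's "talk to your technical buddy" routine, sending the math to Wolfram Alpha instead of letting the chatbot compute it, is an instance of what is often called tool routing. A minimal sketch of the pattern, with a deliberately tiny local calculator standing in for a real external solver (the function names and routing rule here are hypothetical):

```python
import re

# Sketch of tool routing: detect arithmetic in a question and send it to
# a dedicated calculator rather than letting the language model guess.

def calculator(expression):
    """Tiny stand-in solver: handles a single 'a op b' expression."""
    a, op, b = re.match(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)", expression).groups()
    a, b = int(a), int(b)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def answer(question):
    """Route math to the calculator; everything else to the 'model'."""
    found = re.search(r"(-?\d+\s*[+\-*/]\s*-?\d+)", question)
    if found:
        return f"Calculator says: {calculator(found.group(1))}"
    return "Model says: (free-text answer)"

print(answer("What is 12 * 34?"))
```

Production systems do the routing with the model's own function-calling ability rather than a regex, but the principle Peter describes is the same: the part of the answer that must be correct comes from a system built to be correct.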
I want to do one more question in this realm, and then, because somebody mentioned it, my brain was also going to open pedagogy. But a question, as I'm thinking about this and hearing these different pieces, and I think this is really important not just for AI but for how we as a whole craft materials and think about licensing: what level of sourcing is acceptable? I think this is an interesting thing when we're thinking about AI, which, as people noted, is pulling from its training set, or pulling from RAG retrieval. What is the appropriate level? How many sources do we need to know about? I ask this because I think about compiling a textbook, compiling these resources, reporting out to students: how much do we want to add there? Because in my head, I'm also imagining that for some of these things, the sourcing could be its own 20, 35, 50 pages. And is that just more data we're creating that nobody looks at? So is there a minimum? Is there a maximum? I'm going to start with you again, Anna, unless you want to pass to Peter first, because I see the wheels working. Sure, if you want to take that, Peter. I don't know exactly where the source threshold should be. I don't know either. You know, I keep coming back in my mind to the late 90s and people talking about internet law and figuring out how we use this tool, and we're back in that same territory now: we don't know yet. But someone's going to make a very lucrative career out of figuring these things out and make more money than the three of us combined. I suspect what will happen is that someone will put forward a suggestion, this is the amount, and then we'll test-drive it and see if everyone likes it and everyone's comfortable with it.
Because obviously we're still in the period of experimentation; there are no guidelines. But like the first syllabus statement you ever saw, someone's got to take the first step forward. It may not even be the right foot, but someone's going to put a foot forward, figure it out, try it, and then see who will follow. So there's going to be quite a bit of experimentation and innovation and a little bit of risk taking. There's no way forward other than that. Any thoughts from you, Anna? Well, this might be tangential, but when you started to talk about creating reams and reams of text that nobody's going to read, that really touches on one of my other big fears, and also an area of some hope and excitement, which is around OER discovery: the proliferation of a huge amount of AI OER text, or really OER-adjacent text, since it's not actually open licensed, and what effect that might have on our ecosystem. It could be like the flood of Amazon self-published books, so that it actually makes it even harder to find and vet the resources and see what's trustworthy and what's not, and what's worthwhile and what's not, in the OER landscape. I think there's a real risk there, especially if we're not transparent about what's AI and what's not. But there's also the potential for systems like Perplexity, or AI's ability to summarize, to help us with OER discovery and to help us sift through all that text. And I think we're just at the beginning of thinking about what AI systems might look like that would help us do that, but I think it would be huge. There are so many OER textbooks for first-year composition, and even though I'm the English discipline lead for the OER Initiative in California and I'm kind of helping people sort through that,
it's quite overwhelming even to me to figure out which approach each book takes and to assess quality. There are ways that AI could help us with that, if we're very closely watching how it's built. So I hope we'll start to incorporate that. I did some tests with Perplexity on finding materials, and, you know, it's not quite there yet, but there's some promise. And Anna just brought up a very important point: there are an awful lot of OER textbooks on first-year writing, but there are a lot of areas where OER has never gone, because there are courses where the people who are experts don't have the time and don't want to write the textbooks. So OER has been very expansive in certain areas, but in other very important areas, no one has ventured into it. So AI-generated OER textbooks in certain domains may open up whole new areas where no one is willing to write the textbook, but you might get some humans willing to vet the textbook. So I think it can open up really important areas, because I know at my own college, OER has a firm foothold in certain disciplines, and in other disciplines it's terra incognita, and faculty say there's nothing out there. Like, you keep talking about OER, but there's nothing out there, so what am I supposed to do? I need to make sure that my students have quality information, so why do you keep coming back to me about OER? How do we solve that problem? Well, I think AI could be part of the solution. You could, for example, get an AI to go through a database of information on cybersecurity and, setting careful parameters and sources, say: generate a text this long, which highlights these things. Make sure there are clear learning objectives. Create assessments for it. Tease out themes for students in summaries.
AI as the editorial assistant the OER community has always needed may be a really good way to go. Because again, OER has made a lot of inroads in the humanities, but it's made virtually no impact in some other areas. And, at least in my experience, let me pull back: there are some areas where it hasn't made inroads, particularly in emerging fields and disciplines. I know there are OER textbooks for things like physics and chemistry on OpenStax, but as we see new fields, there aren't established authors, and the people doing that work don't have the time to write. So I would say that's an interesting possibility. Awesome. Thank you. So there are a couple of things that came up there. I'm putting two things into the chat. The second link is an article; I'm pretty sure it's behind a paywall, and I'm sure all of us are grumbling about that. The first piece is where I actually used generative AI to do a summary of it. The article is about, and I think this is something you hit upon, Anna, how we can leverage something like AI to better organize and work through open educational resources. That particular article asks: what if we're able to use AI to build the taxonomy? Because this is the thing we also know about OER: it's scattered across lots of different places and platforms and repositories, and that itself, for faculty who are looking to adopt or adapt, becomes part of the problem: just finding it. And I think that also intersects with the question that a lot of faculty are interested in adapting, but there's finding it and then adapting it, and finding it can take a while. So just thinking about those ideas combined. But I want to spend a few minutes before we start to jump into some of the other questions that are in the chat.
What does AI open up or challenge or illuminate or make us think differently about around open pedagogy and how that plays out in courses? I was really influenced here by learning from Lance and from Maha Bali when we worked on a paper on open educational practices and how they can help us respond to AI. And I think this is a moment for saying to students, we don't have all the answers. It's a very natural moment to do that, and to invite them to collaborate with us in figuring out the questions around teaching AI literacy, working with AI, citing AI, and where the boundaries should be. And so, you know, there's a lot of work that's been done around this; Lance has given a model of working with students to come up with policies around academic integrity and AI. There's also a lot of exploration of whether AI can facilitate open pedagogy, because it could help students generate materials, say, examples of a concept that relate to something they're interested in, right? It could also be personalized to them. So, yeah, I think there's a lot of potential there on a lot of different levels to collaborate with students. And I think it is really exciting to say, look, I'm not the prompt engineering expert here to teach you; we can experiment with this together. And that builds on the whole ethos of open pedagogy. There's not much I can add there, because Anna really said it: we're telling students, this is new territory for us, we don't have all the answers, so we need to move forward, and wouldn't it be great if we did it with you rather than to you? So, going back to the issue of whether there's quality content and everything.
If a student identifies a topic area where there isn't a lot of OER, we can work with them to build some sort of OER resource, and then talk with them about attribution and copyright and things like that, and bring them into our world and our concerns. Because obviously, as academics, one of our concerns is what knowledge is, what the source of the knowledge is, and how we got there. So if we're constructing OER texts with students using AI tools, it's an opportunity to bring them into these conversations we're having about attribution, about what it means to create, what it means to remix. And the other thing, too: for example, there's been some concern about whether assignments are culturally inclusive, and people have said, maybe they're not, but I don't really have the time to address that. So one of the AI tools we built at my school remixes an existing assignment and aligns it with cultural knowledge from a specific area. It's not going to be a perfect output, but it'll pull in some sources related to the culture of that area. It produces a rough draft, which can then be reviewed by the faculty member and the students who come from that culture, who can say, okay, how much of this is accurate, how much of it is inaccurate? And then together they refine it, and then the assignment goes forth. So I think there's a lot of potential there for AI as an assistant to both the faculty and the students in producing open pedagogical work. Can I just piggyback on that? Sure. It's just exciting, because I think what Peter's describing is students not just working on the subject matter but building a kind of critical AI literacy. So they're looking at, here's how this is customized to my culture, but actually maybe that's somewhat stereotypical, and really it should be this.
And then they would be talking about the data sets and the training data that the AI is basing that on, and how those data sets might not include enough data from their own culture. And I think that even in working with AI outputs and deciding whether they are accurate, students are building that practice of pushing back against the AI and being skeptical of it. Now, they would need some help from the instructor, because they don't yet necessarily have the expertise to evaluate the accuracy. But they might get somewhere; they might be able to find some problems with it, and that builds the critical literacy. Thank you both so much. So what you're seeing on the screen, and this will also be shared out, these are in the chat: some articles and research that have started to come out focusing on this question of open education and AI. Again, some of those are unfortunately behind paywalls, which, for anything published on open education, still baffles my mind, but I get the politics and the challenges of that. The article I shared that includes me, Maha, and Anna is from the Journal of Applied Learning and Teaching, and that is open access. So I'm going to switch now to the questions from the chat. This one is from Eric: what is your opinion about creating new OER with GPT techniques, and maybe replacing expert-reviewed resources with GPT tools? What are your thoughts and feelings on that? Let's go with Anna, do you want to start us off? I'm sorry, can you repeat the question, or let Peter go first? I'm blanking. My answer would honestly be, I don't know. The idea of using AI to review the quality of a resource implies that the AI tool you're using has the expertise. And if we don't think it does, then we shouldn't use it.
I know there have been some conversations in the medical field about using AI tools to speed up the peer review process. I think right now a cyborg approach is the best way. There's a lot of content which AI can review for factual information if it's been properly trained, which will then free up energy for the human reviewers to do other things better. So we have to avoid an either/or: is it the human or the AI? AI is a tool; humans should be using it. How we use it is really the question, and when we should use it. There are going to be times when we don't use it. I mean, even if you love AI, there are times when you'll say, I don't need it. Like, I don't know about you, but I get really irritated when the AI starts trying to write my memos for me: here's the phrase you want to use. Like, no, no, no, I will ask your opinion when I need it. Sometimes it can be a little intrusive and you need to shush it away. But that's the point: it goes back to the idea of human and machine, and the question is, what's the balance, and when do we want to use it? And that leads to conversations with students about when we should use technology and when we should not, and that's a larger conversation beyond just AI. When I started out in the teaching of English 30 years ago, one of the scholars who was a touchstone for me was Neil Postman, with his concern about language, education, and technology, and about how the uncritical adoption of technology can blind us to the long-term problems because we're so dazzled by the short-term benefits. Postman was concerned predominantly with television, and I always thought that if Postman had lived into this era, he would be in hell, because the problems have only amplified, which requires even more critical literacy around how and when to use technology. And any literacy about using technology is also about when not to use it.
So there are so many conversations that we should be having with students. And I think the most forward-thinking instructors, among whom I would include Anna, have already begun these conversations. So I'm very at ease about where her students are going to go. I'm more concerned with the nine million other classrooms where these conversations are not happening. Because, you know, I go to conferences about AI, and I hear questions from people, and I realize that colleges really haven't done anything yet. They're still sort of paralyzed. And you have to move forward. There's going to be no roadmap; you're going to stumble. But the first thing is you have to step forward and have conversations, identify who's using AI and how they're using it, and then generate a conversation around it. And the funny thing is, the people who are in this sort of meeting are people who are probably already involved in it. There are a lot of people who should be here but aren't, and they're the ones I'm concerned about. Yeah. And I appreciate the focus on the places it falls short and helping students see that. I'm seeing the comments in the chat about essay feedback and tutoring, and the concern that maybe that tutor is not going to give good advice, or it's not going to resonate with the student, or it's going to overstep and the student will think it has more authority than it really should. And so I think, in how we approach it with students, we can build in this critical approach, giving them practice, you know, taking it down a notch and trusting their own voice and their own purpose. And we can build that into the experience, even if our end goal is that they use AI all the time and they love it, right? They still have to be critical of it to do that well.
And so building the students' confidence and their ability to speak back and push back and edit and see the flaws, I think, is essential. And that's part of this kind of co-intelligence paradigm that Ethan Mollick, the Wharton Business School professor, is pushing with his new book: thinking about when we use it and when we don't, and how it can push our thinking rather than taking over and replacing our thinking. Right. And in the chat, Mark Wilson pointed out one of the classics, Douglas Engelbart's work from '62 on augmenting human intelligence, and I think that's really the key word with AI: it should augment human activity, not replace it. It's like the Iron Man armor, you know; there's still a person inside there calling the shots. I had to make sure I was unmuted. So this is great, this is a really rich discussion, and there continue to be a lot of questions in the chat. We won't get to them all, because we have less than eight minutes left, so I do want to lean toward this one; you've hinted at it. I'm wondering about something a little more tangible, for the person who is paralyzed or the like, particularly for the space of OER and AI: what advice do you have for faculty or staff trying to figure out whether using generative AI makes sense for creating OER? Because I think we saw a lot of these types of questions in the chat, like, well, what if AI is pulling on copyrighted material to create its output? Whether that output is in the public domain hasn't been decided legally; we don't know that that's firm. What advice would you give to faculty and staff trying to lean into this or start to play around with it? Well, I just found, and it's in the links from the slides, there's a BCcampus Open Education page on using generative AI for OER, and they have a really nice little flow chart for that decision process.
And I would just add to it the question about how important your expertise is to how you want to frame this thing that you're writing. But they ask questions like: does it matter if the output is true? Do you have the expertise to verify the output? Are you willing to take full responsibility for missed inaccuracies? Those are their three main questions. And then I would add: are you able to figure out what the true sources are of this? Are you able to do some research to see if you're putting out material that should be cited? So those are the caveats, but I also think that, like Ethan Mollick recommends, it's worth spending ten hours with an advanced model just to build your sense of what it can do and what it might be useful for. And I think there are a lot of ways it could be useful in supporting OER that are not actually generating text you're going to paste into the OER. We're experimenting with that: having it ask you questions about what you want to produce, having it review drafts, having it give counterarguments to boost your own thinking. It is definitely worth doing, and then as you get more familiar with what it can do around your subject material and where the weaknesses are, you may start to get a better sense of how you want to use it. One of my hats is as director of professional development at my college. We basically got a small mini-grant and paid for professional-level licenses for about ten faculty members, predominantly from the English department, because they were leading the questions about it. We got them ChatGPT-4 accounts and just said: go play with it. This is a sandbox experience. We have no guidebook. You're smart people, you have questions, here's the tool. Go play with it, talk to one another, and then come back and tell us what you learned.
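The decision questions above could be sketched as a simple checklist gate. This encoding is my own illustration, assuming a flow where the truth-matters question comes first and a single "no" on any later question suggests pausing; it is not the actual BCcampus flow chart.

```python
# Questions mentioned in the discussion, checked in order.
QUESTIONS = [
    "Do you have the expertise to verify the output?",
    "Are you willing to take full responsibility for missed inaccuracies?",
    "Can you research whether the output contains material that should be cited?",
]

def ready_to_use_genai(accuracy_matters, answers):
    """Return (True, None) when generative AI seems reasonable to use,
    otherwise (False, first_question_answered_no)."""
    if not accuracy_matters:
        # If it genuinely doesn't matter whether the output is true
        # (e.g., loose brainstorming), the remaining checks are moot.
        return True, None
    for question, answer in zip(QUESTIONS, answers):
        if not answer:
            return False, question
    return True, None
```

For example, `ready_to_use_genai(True, [True, False, True])` would flag the responsibility question as the reason to hold off.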
And if you need additional support, like training on designing prompts, we'll do that. But the best thing we can do is give you space to experiment, like a laboratory, and then let you play with it. I think that's the way it should be. I always come back to this: the primary form of learning for humans is play. You have to give people an opportunity to play with things and learn how to use them and how not to use them. And, you know, we did that with the internet. I mean, remember the panic about Wikipedia? It was going to destroy the world; epistemology was going to collapse. And we figured it out. We'll figure this out; it's not our first rodeo. But you have to give people the space and breathing room to experiment with it, and, obviously, to bring students into it. As part of the model for this grant, I said, if you have a couple of students working with you, I'll get them full accounts, have them play with it, and then come back as participants in the study and report from their perspective on how they used it. When I help people get these tools, I say, oh, I have only one requirement: I want to smell burning rubber. I want to know that you drove this thing around and you almost broke it, and then I want you to tell me what you learned. And I think the same thing with the students, because I trust the students will come back and say, we used it this way, we tried to use it legitimately, we kind of experimented with the cheating angle, whatnot, and this is what we learned. So the exploratory phase, I think, for any institution is really the first step going forward. You have to do that. And I do think getting full licenses for the more powerful tools is an essential step. You can only tell people so much if they just go play with Bing.
I don't even drive people to ChatGPT 3.5 anymore, because there are too many flaws. That's a teaser tool; it's not something I would use to do anything really rigorous. And what worries me is that because of its limitations, people's conception of what these tools can be can often be damaged. So I say, if you want to learn about them, please come to me and I'll help you get the more advanced tools, so you develop a better conceptual model. And then just read what other people are doing, people who are forward-thinking like Ethan Mollick, who has a new book out called Co-Intelligence about using AI tools, copies of which I just purchased for faculty and staff. Again, run a book club; get people doing things. I always go back to Herb Simon: people learn when they do things, and the only thing you can do to help them is to influence what they're doing. So give them the resources, give them space, give them the time, and then say, come back and tell me what you learned. So it's really a discovery period. Thank you so much, Peter and Anna. There's one last question I'm going to respond to as quickly as I can, even though we're at the end, because this one is such a challenge. To Jonathan's point, or to this question: how do you think about whether it's worth using this tool despite its various downsides? Generative AI has lots of downsides; it's problematic, it's challenging. There are two answers to this that I think are worth considering. One is: yes, it has all of these problems, and we should be aware, and we should be engaged with faculty and students about how we think about these tools. That's part of the critical AI literacy. And there's also a reasonable discussion to be had around this:
Yes, and there are so many other technologies; us being here on Zoom is also having various deleterious effects on the world at large. I'm not trying to say those are equal, but in many ways we're all accepting the damage we're doing, and we do that critically or uncritically. Given the amount that this is showing up throughout the world and industry, we still need to engage with it in some way for our students to be prepared, because businesses are going to ask for this, so why not work with students to make sure they have the language to challenge it? And then finally, there's something worth thinking about here: the better we can use these tools, and the better we understand them, the fewer resources we consume. So if we are really working to help students fully understand how to maximize their prompts and their uses, that's really important. So can we give a round of applause to Peter and Anna for their thoughts, contributions, and ideas, and for making this a really lively conversation? And then I will pass it back over to Heather, because we are over time. And thank you, Lance, and thank you, Heather, as well. Yeah, thank you. Thank you everyone so much: Lance, Anna, and Peter. We've really enjoyed this discussion, and to everybody in attendance, thank you for coming. We will make sure to post the recording as well as the slide deck on our webpage, and we'll also let you know by email. I sort of scanned through our closing slides, and Liz put the links in the chat. I hope you will check us out and come back and be a part of future webinars. So thank you so much for joining us today.