All right, hi everybody, good morning or good afternoon depending on where you are, and thanks so much for joining us for Liquid Margins 45, "AI and the Future of Learning: Trends, Challenges, and Opportunities." We're excited to have you all here, especially as a follow-up to our Liquid Margins episode about a year ago, just as AI was really making it to the forefront of technology, and especially into the classroom, with a lot of the things we're hearing about students these days. We thought it would be a great time to bring back some of our panelists to talk about what we've learned over the last twelve months and how we see AI in the classroom and the future of learning. Before we dive in, just a couple of housekeeping items. First, if you have a question, we would love to hear it. Please use the Q&A section, available at the bottom of your navigation bar. Our panelists and I will be answering your questions and responding there, so make sure you drop questions into Q&A rather than chat, which might be what you're used to. Second, if you want transcripts or closed captioning, you can enable that via the closed caption icon in your Zoom menu, which should be located at the bottom of the screen. We're back in the swing of things this year, so we've got Liquid Margins 46 coming in just a few weeks, focused on boosting grades, retention, and engagement using social annotation. That will be on Tuesday, March 5, so make sure you register for that as well. I'm super excited to be here. I'm Joe Ferraro, and I lead the commercial teams here at Hypothesis. We had an amazing session last year, so we wanted to bring back three really amazing panelists: Dr. Nick LoLordo from the University of Oklahoma, Rachel Rigolino from SUNY New Paltz, and Joel Gladd from the College of Western Idaho.
So, hey everybody, thanks so much for joining us again today. Rather than driving the conversation, I thought we'd just dive in. It's been about a year since we all got together, and AI in the classroom was pretty nascent then. It was sometime around Thanksgiving of 2022 that ChatGPT really became a four-letter word for a lot of schools, with a few more letters than that, and people have been trying to figure out the best way to adapt: are our students going to use it, and what is the utility and purpose of this? We had a lot of predictions then. What have we seen since we last sat together?

I'll jump in really quickly to say that my big thing was that I offered a winter-session course in AI applications, primarily in education and business, and it was probably the best course I've taught in a while. It was filled with future teachers who experimented with generative AI programs, and it was a blast. So that's my big update.

That's amazing. And what were some of the things you learned from that course?

I learned that there's actually been a lot of resistance on the student end of it, and once I told them that it was okay, that we were going to have fun with this, there was a lot of creativity, using image generation along with text generation. But then there's the reflective part of it, right, where you reflect on what you've done, how you did it, and how the tools worked. And it was really just so much fun.

And so you said the students had some resistance to using it. Were they afraid they'd get in trouble, or?

And they still are scolding me. I mean, this semester, they look at me like, tsk, tsk, what are you doing? As I'll talk about later, AI is kind of like my teaching assistant, and they're still kind of disapproving.

I don't hear that as often as I hear the other way, that students are leaning really heavily on AI.
So you had a good cohort. Yeah, that's interesting. I do see that, and it's something I can speak to later as well; it's an interesting dynamic. One of my predictions is that AI is going to roll out in a kind of dialectical fashion throughout higher education, meaning it's not going to be simply, here's the future of higher ed and we're all eventually going to adopt it in this one way. I think there are going to be a variety of forms of resistance and alternative teaching in response to this. And I do see that in some students. Since last year, as students become more familiar with the tools, they're learning on their own, but also through others, that these tools can be dangerous, that they can interfere with learning, et cetera. And they come into the classroom with preconceptions about whether or not they should, or shouldn't, use AI. So one of the things I want to continue to do is provide a space for a range of responses to tools like ChatGPT and not assume that we all need to approach them one way, because I realize it's still so new. So, one response to this, and one of my big updates over the past year, has been to work on a training for students that's tailored to the kind of students I teach at my institution, keeping in mind their familiarity, or lack of familiarity, with these tools. Along with another faculty member at my institution, we developed an AI training; I can link to that in the chat. We have two versions of it. We developed it over the summer and have been trying to keep it updated. Where would I drop this? Oh, yeah, just put them in the chat.
Okay. So, this is the link, and the HP version is the more recently updated one. One of the things we're trying to do here is not assume that faculty or students have familiarity with the tools. So if I want to roll these out, or teach with them in any fashion, I want students to know the architecture, how the models operate, and then just the basics: what is an input? What does it mean to input something? What is a prompt? What is the context window? Very savvy users of ChatGPT understand all of this, but once you start to teach with these tools you quickly find out there are wide disparities in the level of knowledge. So, the training. And then from there, figuring out, for the classes I teach, what the use cases are, because it's been different in each type of class. Writing courses, literature courses, and first-year experience courses are the main categories I teach, and I've had to figure out where AI fits into each of those roles. So I'll stop there; I've been talking a lot. I can say more, but I'm going to pause for now.

I'll jump in then, because I have some similar experience, maybe with some differences. I teach in a small, almost boutique freshman writing program: first-year students writing primarily about literature, in the broader, more traditional definition of that term. From that perspective, what I've been working with and struggling with is how to think about ChatGPT as a humanist. Which is to say, if I put on another hat and do writing-across-the-curriculum work with the business community or construction engineering or whatever, they're interested in output; they're interested in writing that solves particular tasks.
And that's a very different perspective than, say, a first-year writing course, where we're thinking about writing as the building of student identity, civic identity, personal identity, all of that perhaps corny but fundamentally important stuff. So in a way I have to, and I have the privilege of being able to, talk individually to students, in the context of required one-on-one essay revision conferences. What I've found myself doing a lot over the past year is persuading a student that for a particular assignment, say a manifesto representing the perspective of a group, your generation, in the context of a writing seminar entitled Generation Gaps: why would you want to use a robot to speak for the particular problems that your affinity group, your peers, the members of your identity, are facing? You can tell a robot, as we all know now, to role-play that, and it'll produce something polished and more or less hollow-sounding, depending on how good a prompter you are. But why would you want to do that? So I think a lot of these one-on-one conversations have helped students think about the extent to which writing is a workplace tool versus the extent to which it's more of an exploratory tool that humans use to cultivate being human.

So when you explain that to students, how do they take it? Because that's actually a question I've thought of: why would I ask a robot what real people would think?

I think students partly ask a robot because of the transition from high school to university, because of trends in high-school-level pedagogy that often involve writing about set passages, extracts from texts, in narrowly skills-oriented ways. So I don't think the high school curriculum, as I see it through the refracted perspective of what my freshmen tell me, and the literature I look at...
I don't think the high school curriculum necessarily positions freshmen to come into university and trust an instructor who's going to ask them to write in an exploratory, reflective fashion. And I think building that trust is something one has to work on, hopefully in institutional circumstances that make it plausible.

Okay, so if I'm hearing you: when you're asking students to write, that's something people deem a bit more personal than maybe some of the other examples we see for AI. We saw the headlines last year, that ChatGPT is crushing it on the bar exam, or the SATs, or even science exams, compared to the average student. But that's not helping students really learn what they need to learn. Is that right?

I think that's right, and even before ChatGPT. I'll give an example: grading a thousand AP English exams, and learning many facts about poetic form, structure, et cetera. But none of those thousand essays gave me, as a reader, a reason to read them, and very few of those essays expressed anything I would call a sense of motive compelling that student to write. So yes, ultimately, and I'm going to stop here because I'm certainly transitioning into a different conversation, the relationship between process and product is what's looming here: allowing students to trust the writing process more, in the context of an educational system that maybe hasn't been emphasizing that so much.

Okay, and we've got a question from the audience: does AI actually impact students' ideas of creativity, originality, and authenticity? If you're reading a thousand essays about the same topic and don't have a compelling reason to read them, I guess we know what the answer is going to be.

Authenticity is a word that, in my experience, first-year students use and understand; it's a key term for them.
And it intersects with discourses like the whole conversation about influencers. So there's a notion of writing authentically, and there's also a notion that you can fake that, that you can perform authenticity very compellingly. So it's a fraught topic. If ChatGPT is asked to impersonate a particular subject position, or if other LLMs are asked to do it, one can produce plausible results. But I also think that if you pose that option to students, it still strikes them as wrong on a fundamental level, and I think that's a worthwhile intuition to work with.

I think I predicted last year that students would increasingly be using these tools to do the work, and I have seen a slight percentage increase. That's still happening as students become familiar with ChatGPT, Gemini, Claude less so, but they understand it's out there. I think it's showing up in Snapchat now as well; it's everywhere, in every conceivable app that students are using. There's a slight uptick in how much they're using it to complete the work, but it's not as much as I expected, and I'm seeing students eager to learn. I should not have been so cynical. I can tell, in online courses and face-to-face modalities alike, that students do want to learn. They do want to preserve their creativity and voice, and they're trying to figure out: how do I use these tools, which are getting me such good results, while also learning how to think for myself? So this is not a zero-sum game, and I don't think that has to be the proposal here. Something like an AI ban, I think, treats AI as pure erosion of the learning process, and I don't think that's the case, and I think our students are finding out that's not the case, right?
However, we are getting research results about what kinds of implementation can foster learning and what kinds can interfere with it, and this is something I'm trying to be aware of and stay up on. I saw a study the other day that looked at the impact of AI-assisted peer feedback in a writing course, and at the effect different kinds of assistance had on whether students were more capable later in the course at the same operation. What the study shows is that if students were given too much AI assistance early on, when new concepts related to peer feedback were being introduced, they actually performed worse. Even though they gave high-quality feedback, feedback that was rated very highly by their peers, later on, when that assistance went away, their feedback was rated more poorly than that of students who had received less assistance or none. In other words, if students receive too much assistance early on, when practicing new skills and habits, it can, and this is probably intuitive, interfere with learning something new. So one strategy here is simply to make sure students are forced to make choices when it matters, and then AI assistance can come later, to reinforce certain choices once students understand which choices need to be made and can evaluate and scrutinize generated results based on something they've already learned. That can be one option. But the instructor also needs to determine, in each course: which choices do I really care about, which do I want to see and assess, versus which choices are more tangential or less important, where I'm fine if they're outsourced to a machine? And I think students need to figure this out, too.
And faculty do this; a number of faculty, myself included, know ourselves: these are the choices I want to make throughout the day, and these are the choices I don't care about, that I'm perfectly comfortable outsourcing to a machine.

Right. Yeah, we have quite a few comments saying the first level of concern is that AI is going to become a substitute for the reading. One attendee says his institution actually encourages using AI to make sense of long pages and other documents instead of doing the reading. What do you all think of that? Actually, you're the experts; I don't think anything.

Okay, well, really quickly, and I've shared this with my students: I've got a plug-in, I forget the name of it, that summarizes articles I'm reading. I tell my students that I use it constantly, and the bullet points let me know whether I'm actually going to delve into something; it's almost acting like an abstract, right? And then we segue into talking about scholarly journal articles and reading the abstract, and I see very little difference there. But the fear, right, is that instead of actually doing the reading, students will just type in a prompt. That's been around since before ChatGPT, though: typing things into Google just to get some kind of standard response from a study site. I'm very interested to know what other people are thinking about that.

Yeah, I have a similar take, in the sense that the production of summaries may be something you can outsource to an LLM. But at some point the summaries themselves have to be evaluated, and they enter into a conversation, right, the scholarly conversation. And so, at some point, reading still has to happen.
And this is certainly one place where the positive benefits of web annotation might start to emerge, and I know we're going to get there. So I guess what I would say is, I might ask students, I have asked students, to use an LLM to translate a paragraph of a historical source into more contemporary language. Then I'll have students in groups look at that translated version and try to make an argument about what nuances of meaning, or what syntactical complexity, doesn't get translated. So, not doing the reading is always going to be a problem, but I think LLMs are only one factor pushing in that direction. Rachel, you alluded to this, if I remember: there was the long-running conversation about cell phone technology, and curricular developments; there are a lot of different forces making students less capable of reading extended, complex prose of any kind.

I can speak to this in one of my courses, and I'll try to be specific here. This actually fits in, since Hypothesis is hosting this: I use Hypothesis to help with this question of whether we should trust even things like generated bullet points and summaries. In fall of 2023 I built a number of AI assignments, assignments that involved using AI, throughout the course. Some of them used blended social annotation, others just discussion forums and that kind of thing. This was asynchronous, so it's hard to improvise on the fly, but this goes to show it can be done asynchronously. In some cases I would have a conversation with ChatGPT, GPT-4, the paid model, and try to get it to do something for the course.
And I would essentially force the model to hallucinate, to generate results that weren't present in the context. I think I asked it to provide key quotes comparing the gothic elements in two of Edgar Allan Poe's short stories. I would upload the stories and say: hey, can you identify gothic elements? Okay, now can you provide key quotes? Now can you read this scholarly article, can you explain the scholarly article, can it help make arguments about gothic elements in these two stories? Now can you outline an essay, now can you draft the essay? So, a process similar to what a student would go through, but using the machine. And I would force the hallucination because I knew that as soon as I asked it to provide key quotes from the Poe stories, it was going to mess something up, and I think about 60% of the quotes were hallucinated. Of course it's getting better now, so that's less likely to happen, but it's still pretty frequent and something to look out for. And then I used Hypothesis: I fed it the conversation, embedded within our LMS, and had students comment on my conversation with ChatGPT, with the question: what are the limitations of using this tool to help complete the research-based assignment we're supposed to complete by the end of the semester? I wanted them to look for limitations due to the architecture of ChatGPT: look for hallucination, look for bias, look for lack of original thinking or lack of voice, and what does that mean, can you put your finger on exactly what that means? So they had a discussion with the conversation in order to figure this stuff out, right?
So, in response to one of the questions I'm seeing in the chat, can it foster critical thinking: well, this is an example where I think it does, where I think it did. Having a conversation with the machine can be highly productive, depending on how it's implemented.

Yeah, I'm really impressed, y'all, with what you're doing; you're taking it to a whole other level. The most I do is bring it in, we'll punch something in, and I'll have them look at it stylistically. And getting back to our earlier point about authenticity, I think that's going to be the coin of the realm, right? Students don't like, or they say they don't like, inauthentic things, and it's going to be about who can bring that personal touch. But going back to the reading, I'll just put this out there: I have done away completely with the discussion board at this point. I got tired, last spring, of occasionally getting things that had clearly been generated by ChatGPT, and I just moved completely to social annotation. It's been so much better to just ditch the discussion board. But that's my little soapbox.

Yeah. I also noticed one of the comments in the chat about students, quote unquote, cheating a lot to help with the reading: just skimming through, asking for a summary instead of reading the material, I think this was in a lit course. Again, when I use Hypothesis in the LMS, I found that students really did engage with the primary text, more than they otherwise would have. I mean, it's kind of a simple design tweak, right? If they need to be very specific, and they annotate a string of text and someone else is responding to that annotation, they're less likely to outsource it. Right.
It's just a more engaging way, instead of: read this, then go to the discussion board and write a paragraph that's very easy to just copy-paste into ChatGPT.

Yeah, that makes sense. It enables the sense that authors are real, that authors have conversations across history about issues, and that you can have a conversation in the margins of the text. And when it's working well in that sense, Hypothesis gives me, gives us, in the classroom something like the best of both worlds: some of the passion of those freewheeling conversations that can happen in a classroom, but at the same time a gravitational force that keeps those conversations from digressing, or at least keeps returning them to the text eventually. Not every set of annotations is going to be successful in the way I describe, but I do think it helps teach writing as a kind of ongoing historical conversation. This is the famous parlor metaphor from the work of Kenneth Burke, the unending conversation about ideas that all rhetoric and composition people know, which is of course in some respects an idealized and problematic fantasy. But again, Hypothesis helps students identify themselves as writers, like the writer they are reading. It helps you talk about the authors of the texts. They don't say things like, excuse me, I knew this would happen, they don't say things like "it says there in the book." They realize they're arguing with authors, with human beings, and I think that's another related benefit of Hypothesis and web annotation.

And so have you all had to really adjust how you're assigning annotations in the wake of AI?
And with the discussion board piece you mentioned a few minutes ago, Rachel: I feel like if you scroll through most discussion boards, every student is just agreeing with what the student before said, and if you're feeding a prompt that's hallucinating quotes, the whole class is now agreeing on things that never happened. So it's about finding different ways to actually get these students to engage. Did you have to change your prompts at all when you started to think about annotations, knowing that students could sort of talk to the machine?

I did not, you know; I really haven't changed too much. I always try to be somewhat creative, though. For example, a very easy thing: in every discussion of whatever short piece of fiction we're looking at, share a related resource. It can be a video, it can be this or that or the other thing, and that just livens it up from the beginning. Also, in terms of annotations, because I have a class of 25, I keep them in groups of five or six, so they kind of get to know each other, and over the course of a couple of modules they really become a group; then I do shift the groups around. And getting back to this idea of a real conversation: the discussion board in the beginning seemed promising, like it could possibly be a real conversation, and I think we all realized after about ten years, or five, that it wasn't going to be. This does capture it; you just tweak it a little, as I say. I find smaller groups, rather than one big all-class group, foster that sense of community.

Yeah, I have had to change some assignments; let me speak to how. I've already explained how I use social annotation to help students think critically about generative AI.
But then there are also separate assignments where I'm asking students to use these tools. And when I do, by the way, I allow students to opt out and complete an alternative assessment. I may change that at some point in the future, but right now I recognize there's still a heterogeneity of responses to this, and I want to allow some students to just say, hey, not for me, I actually do not want to use these tools. I don't ask them whether they want to opt out of using the internet; obviously they can't complete the class without that. But for now I think it's okay for me to allow students to opt out, and I get maybe one student per section, maybe one a semester, who opts out. So in these situations where students are asked to use ChatGPT or another AI tool to experiment, to practice an outcome in the course, I try to keep in mind that I don't want to interfere with students' learning, and I don't want to interfere with student agency. The way I try to do this is to ask them to first attempt something by themselves, usually to demonstrate proficiency or familiarity with the content; say, I want them to be able to summarize the text themselves. And you could say, well, students might go ahead and cheat and just generate the summary that's supposed to be theirs. But I think most of the time students actually want to learn here, and if they know they're going to be generating the summary next with the machine, they'll see the point. They understand: oh, it matters; the instructor really does value my thinking at this stage in the process.
So they'll actually try to read the text and summarize it themselves, knowing they can generate a summary next. I think when we make it explicit, rather than pretending students aren't using the tools behind the scenes, we might actually foster more thinking this way. So I ask students to do it unassisted, try it out, and then I ask them to get AI assistance in some fashion. And then there's a reflective piece afterwards: hey, did you notice any limitations in the generated results? Keeping in mind the training we did, did you learn anything? And this is one of the more interesting responses: hey, you had to do this by yourself, then you used AI afterwards to practice the same outcome. Was that productive? Would you have been better off had you not experimented with AI at all? And most of the time, students feel they're becoming more confident with the course outcomes by using generative AI, which is a really interesting result. And then finally, after the reflection, I ask students to respond to their classmates' experiments, and I find this to be critical to the success of these assignments, especially in an online environment where you can't just have these conversations live. I ask students to respond to classmates because some students found biases or hallucinations or just severe limitations, like, oh, this tool was garbage, it didn't work at all, and other students point out, hey, maybe try this next time and you'll get a different result. So it's kind of interesting to see students coach each other in how to use the tools.

I'll just add briefly, then, that I have not yet required student use of ChatGPT or any other LLM.
I have done some workshops in class with other tools, some of the alternative database programs that are being developed, Research Rabbit, things like that, which I've experimented with in the classroom. I think requiring AI at this stage requires an institutional assurance of equity, so that I know all students are using the same platform with the same capabilities and that privacy is being covered; that would answer one set of questions. But there's an even bigger set of questions having to do with labor exploitation and environmental impact, and knowing as I do the concerns of university freshmen, not an insignificant number of them feel strongly about those issues, too. So I actually do intend to require extensive use of LLMs this fall, since I'm going to teach a new seminar on writing as a technology, and I'll devote a substantial section of the course to ChatGPT. But then the students will know what they're getting into in advance, and they'll be self-selected. There's always scheduling, but some of them will be there because of my course theme.

In my winter-session course, the one entitled AI Applications, I did not require it. I think everyone but one person wound up using an LLM for their final project. They did picture books for children; they were future educators, K through 12, most of them focused on younger children, and that was a lot of fun. But exactly: I did not require it, because there were students who didn't want any of their information put into an LLM, and that was fine. They were able to do an alternate assignment, and that worked just as well for that one student.

And I think we've had a couple of questions about the ethics and what the alternatives are, so it's great that those have been shared.
But I guess we've got Greg's note now: especially in the tech and science fields, we're hearing from industry partners that if students don't know how to use these tools, they're not going to find jobs. So how do you bridge that gap, especially with idealistic freshmen? I teach a lot of business students, so I'm just very upfront about it. And we read, you know, articles about it, but then we get into it. For example, I use a lot of case studies. I'll back up a little: I used to buy little packets from HBS, Harvard Business School, and now with ChatGPT I can create fictional case studies, and I tell them that I've used ChatGPT to create these case studies about generative AI. I'm not even touching the other AI, where you're inputting, you know, client data and coming up with who gets the loan and who doesn't. But these case studies are a way to talk about ethical uses of generative AI, because they're going to have to know how to do this right, and not to input client data, for example, and to be aware of all these issues that are out there. So yeah, I mean, it's kind of a no-brainer, especially in some of the applied sciences and business. It might be easier for someone like Rachel to make that connection, because of the types of courses that she's teaching. For me, a lot of the courses that I teach are going to be, you know, first-year writing and first-year experience literature.
And so, with these students, I'm just covering the foundations in a lot of these things. I wouldn't say it's a harder sell, but there's less focus on content expertise when it comes to the workforce, and so it's going to be more challenging for me to show, like, here are some studies out there that show how AI is impacting this particular industry or whatever. I can do that, and sometimes I do, but it's not a major focus. I think it's something I should do. So I oversee the first-year experience program at my institution; that's part of my new role here. As for AI training, I shared that module: all students who take our first-year experience course have to complete that training during their first semester at our institution. So they do have this foundation, but I think part of this program should cover, at some point, hey, investigate how these tools are used in a field that interests you. There's something around this that should be part of the exploration here, because this is still so new and there are a lot of moving pieces. And, you know, how students use these tools in their field in a few years, or several years, is going to be very different from how we currently use them. I suspect that's going to be the case, and so really it's a matter of understanding how to work with AI in ways that we haven't worked with machines in the past. I think that's the bigger thing. And faculty do a lot of things throughout the day, and I think I mentioned this before: we also are an industry, and part of our job is figuring out how we need to value our time. Some things, like, you know, updating an F.R. or something like that for a job, don't require our original voice; it just needs to be done. Right?
And there are ways to, you know, be productive and be better at our job using these tools, and there are ways to use them poorly. So we're figuring this out ourselves; this is also true for us. And one thing very quickly, too: I forgot about my education students. Exactly this: they're going to have to know all about generative AI. So again, with creating personalized things, but you're right, creating, I hate to say this, but creating a rubric? Let AI do it. I mean, yeah. I'm drifting a little from the last question, but one thing that seems worth adding here is that I had occasion to look at our general education student learning outcomes here at OU, and to think about how one might respond to the rise of generative AI in the context of a list of these outcomes. So if I read off the six categories, and I won't go into huge detail here: communication skills; technology and information literacy; critical analysis and scientific reasoning; quantitative and numerical analysis; community, culture, and diversity; and arts and humanities. You can already see that AI has an impact in all of those areas and can be approached from all of those perspectives. And in the case of a writing class, it doesn't have to be approached in any one way, but whenever one poses, let's say, relatively complicated social questions, we might say a university level of complexity, whenever one poses those sorts of questions in the near future, or right now, you're going to find that AI-related debates are popping up almost immediately. It just seems to me that AI is going to become an increasingly central part of any general education curriculum, because AI is almost the nexus. AI issues reconfigure the relationship between specialization and some kind of general human knowledge. That whole problem is getting reconfigured very quickly by AI, it seems to me.
I had a conversation with an instructor last week who primarily teaches first-year students, and they said one of their biggest challenges is that most first-year students are used to traditional learning. They're coming out of taking notes in notebooks and reading paper textbooks, and they're getting to the university level where suddenly they have a lot of online resources, and now they have ChatGPT that can help them take a few shortcuts. And then they actually compared it to Autopilot on a Tesla: you still have to have a driver's license, even if the car is going to avoid the accident on the road in front of you. I mean, what do you see with students that came in this year, if you're working with first-year students, and how they look at this, maybe as opposed to the students who started brand new last year? That's a fascinating question. So part of the complication for me is that increasingly a lot of our gen ed courses are offered as dual credit, and a lot of dual credit students take them. So increasingly there's kind of this overlap between high school and college, depending on the high school. But you're right that there are a lot of disparities in what kind of high school experience students have. And I think I notice that in my own classrooms, where depending on their background, they're either very tech savvy or they're not, and the onboarding experience needs to take that into account. So if one of our principles is inclusion, or, something we care about a lot and have been talking about a lot recently, digital equity, to me this is part of that. Part of that piece is that we can't just assume. If we want to roll out AI assistants in some of our courses, we can't assume that there's one kind of digital native. We can't make that assumption.
And we need to make sure that we have a whole infrastructure that provides an onboarding experience, for the reasons that you mentioned. The changes in student attitudes toward technology are so mediated by the pandemic, you know, and the young people who were in high school at the height of the pandemic are now coming into college with the new forms of being social with one another they developed, new forms of communication, even, I think, new affects or something, and I'm not going to break out into a philosophical speculative rant here or anything. As for "digital native," the idea that they see the world differently, I don't find that that phrase or that metaphor gives me any reliable information in terms of what skills they're going to have or not have. I think that's quite various, as I think you are suggesting, Joe; that competence can vary widely. But I'll also say, in terms of using Hypothesis, you know, as a way of sort of thinking about and mediating technology developments: there's a corny professor trick I've been doing where I find a particular web page, it's probably about a decade old now, that talks about the value of print texts and annotation for retentive reading. And I jump in there and I put an affectionate, or at least snarky, Hypothesis comment on it, pointing to the rise of the new digital annotation technologies. And that has led to fun conversations in the past, because students themselves, I think, are genuinely unsure about screen versus print reading and what kinds of skills and abilities are necessary when moving from one to the other. And I was just thinking about my first-year students. I teach a HyFlex course. HyFlex means that students can take my course asynchronously, they can be in the classroom.
Or they can join virtually themselves, you know, during my class time. And what's amazing to me, these are first-year students, is how many of them just want to be in the classroom. And there's this pushback. I was looking at the video a few days ago of people burning the autonomous vehicle in San Francisco, and it's kind of a metaphor, I think, for a lot of our students. Nick pointed to a lot of these intersections: they're environmentally conscious, a lot of them, and thinking about these bigger issues. I think they're maybe coming out of that high school pandemic experience, they would have been in ninth grade, I forget exactly what grade, but they experienced it, and there's pushback on a lot of this, which I think is healthy, actually. So skepticism, AI skeptics. Yeah. Yeah, among faculty and students, the rollout of this technology seems to spark emotions unlike any other technology rollout that I've seen in higher ed. In my current role, I've been having more discussions with, you know, operations and other people about what infrastructure support at our institution looks like, and I've seen in the Q&A that some people, you know, have access to Copilot because their institution is turning that tool on. And for us, I think part of what we're trying to do is making sure that we have the voices of various stakeholders, and we're building feedback loops where we hear and understand the different kinds of voices and concerns around this, so that there are no surprises, you know, like, boom, here's the technology, everyone has to use it. Faculty members will face this in their classrooms as well, how they roll this out, if they choose to.
It does need to take into account the kind of resistance that Rachel and Nick are noticing, because I see it too. It's a little bit uneven. One of the things that's hard for me to predict is whether that resistance will remain just one or two students a semester, or if it's going to grow, or if it could become highly politicized, which might be a possibility, and I think we should be prepared for that. That should be part of the conversation. And we all saw Arizona State in the news with their partnership with OpenAI a few weeks ago, and, you know, Copilot's live on so many campuses. I think one thing that you mentioned, Nick, is learning outcomes. It's not just technology; you have to be able to communicate, you have to be able to do so many other things to actually make it through a curriculum, and there is no silver bullet. ChatGPT is not going to solve it all for us. And so it's finding ways to really enhance the other pieces of the learning journey for the students. I know we're running up on time, so: we made predictions last year, and I'm sure we'll make predictions this year and see how they turn out. If we were to bring this group back together in 12 months, what do you think would be different? And I'll start with you, Rachel, because you're my top left. Okay, so you brought up Copilot; it's going to be ubiquitous. So that's my prediction: obviously, within five to ten years, you won't even be offering a course like AI Applications; it would be like offering a course in spell check applications. But by the same token, with that political division, I think there's going to be a growing, or maybe not growing but a steady, group of people pushing back continually, for various reasons, against just embracing generative AI.
I think there's going to be a definite, you know, little group of rebels there. And what about you, Nick? I think I agree with Rachel's prediction, but I'm going to offer another one that's not in contradiction, but I guess independent. I think we're going to finally see a real change in this long argument about the decline of reading among young people. I think the explosion of generative AI and this long-term cultural shift away from reading extended, complex prose texts is likely to produce some meaningful change at the university level. And I think we as a culture don't decide anything, so I don't want to use that kind of rhetoric, but I do think that the reading of longer texts is becoming very much a specialized skill, and the movement, or maybe just the acceleration, of the production of more and more kinds of task-specific writing, I think, means that the humanities curriculum is going to have to adjust to just a fundamentally different sense of why people read things. So that's a pretty bold and also vague prediction, but maybe you'll be able to call me on it. I like it. I like it. If you asked the AI, you wouldn't get anything quite that witty. I appreciate it. Yeah, I mean, similar to the others, I think the Copilot options, you know, Microsoft schools, Google schools, these will continue to be adopted. I think privacy and equity are going to be a big piece; digital equity is going to get bigger and bigger, because we need to make sure we're not making assumptions about faculty and students around this. But I agree with Rachel that we're going to see unexpected kinds of resistance to this, and I'm curious what that looks like; that's kind of an open question for me.
And this could be, again, this could be a political battle; it depends. It only takes a few people to point out some things for this to become highly politicized, and so that could happen. I hope it doesn't, but it could. And then, I think there's going to be a big split in how faculty redesign their courses around AI. I think those institutions that are highly reliant on online enrollment will probably have more rigorous expectations around designing for AI in the next year; I think we're going to see many more trainings at those institutions. Whereas institutions whose enrollment is significantly in person will probably double down on the importance of those in-person modalities, and I think I'm seeing that at my institution. We're trying to figure that piece out: we're revaluing the importance of in-person modalities, and yet faculty and students are preferring online more and more because it's so convenient. The evolution of AI in higher ed is going to revolve around those exigencies, right? So nothing terribly interesting there, I think. Yeah, I'll stop there. It's definitely an interesting time in education. I mean, four years ago, less than 20% of students were taking any online courses, and within six weeks, almost the whole country was, and it wasn't easy to teach faculty how to teach online or to get students there. We're just constantly learning new things, and so I'm excited to have you all back next year so we can see if we were right. And so, just as we wrap up, you know, we do actually have a really great resource for our current partners, and that's Hypothesis Academy. We have a Social Annotation in the Age of AI course, and our next cohort will be launching on March 5. So if this is something that you're really looking to focus on, it's a great opportunity to learn from other faculty.
And if you're just getting started with social annotation, we've got our general Social Annotation 101 in mid-April. We're going to be sending all this information out with the recording after this call, and this is for all current Hypothesis customers. If you're not a customer yet, we do have a great promotion for customers who sign up before the end of the spring semester, which includes discounted pricing and also gives you the opportunity to join our workshops and Hypothesis Academy, starting in just a few weeks. And again, don't forget we've got our next Liquid Margins, number 46, on Tuesday, March 5, which focuses on boosting grades, retention, and engagement with social annotation. Rachel, Nick, Joel, thank you so much for taking an hour out of your busy schedules to chat with all of us. I know we didn't get to all the questions in the Q&A, but I think everybody learned a lot, and we can't wait to do it again next year. Oh, it was a pleasure. Thanks, everyone. Take care.