Welcome, everybody. Welcome to the Future Trends Forum. I'm delighted to see so many of you here today. We have a fantastic topic with terrific guests, and I'm really looking forward to our conversation. This is a topic that we've been working on for several years, but most intensely since November of last year. The question is, how should academics respond to the emerging artificial intelligence technologies that we usually call generative AI? ChatGPT is, in many ways, the most famous or notorious, but we're also thinking of a lot of chatbots, like Microsoft's Bing and Google's Bard, and also thinking of generative AI in the visual field, where we have tools like DALL-E, Craiyon, and Midjourney, all of which let us make images. There are all kinds of issues. We have a whole bunch of sessions about this, and I've got a bunch of different experts I wanted to bring on stage today. Not in any particular order, I want to start off with our really good friend who's coming to us from late at night where he is, Brent Anders, who is at the American University of Armenia. It's probably almost time for bed for him, so I'm really grateful he could join us.

It's definitely pretty late over here, but that's okay. This is for a good cause.

It is. I'm so glad you made time for us. Brent, you know how guests introduce themselves in the forum. They explain what they're going to be working on for the rest of the year, and I suspect you're going to say something about working on AI. So why don't you tell us? What are you going to be working on?

Right. So my position at the university: I work at the American University of Armenia, and I'm the director of institutional research and assessment, and we also house the Center for Teaching and Learning.
So one of the big things we're doing is trying to push this information out to all of our instructors, as well as making it available online to everybody: different things to do to implement AI like ChatGPT. Because of pushing that information, and because of being on forums like this, there have been lots and lots of calls from other universities around the world. So I've been very busy with different webinars, and as far as what I'm going to be busy with, I have a huge workshop that I'm doing. It's a month-long workshop for a university in Jamaica, a micro-university in Jamaica. So that's going to take up a whole month. And then at the end of June, I'll be going to Tunisia. The university over there really wants to have a hands-on workshop, so I'll be live in person there, because there are lots and lots of people who want to physically go through and ask: okay, how exactly can I implement this? I teach writing; how can I use it here? What if I want to block it? There are so many different questions. There's such a thirst for knowledge. So I'm going to be very busy this summer for sure. The other big news is I have a book coming out next week. I'm very excited about that, because it addresses what I think is the fundamental issue, and that's AI literacy. So the book will be coming out next week. I'm going to push that out through Twitter and to as many people as I can tell, because there's been so much call for this. It's going to be called The AI Imperative: Empowering Instructors and Students. The idea is that this is the key concept, the key area, the key skill, and it's actually more than a skill, because it's kind of a social construct: we have to start thinking in a different way. So that book is going to be coming out. I'm going to push that out. I'm really excited about it.
I thank you for the opportunity to share that, because I think that's going to be the critical thing that all institutions will now have to address, because of the way that AI is being so integrated with everything. If you don't have a critical understanding of the AI literacy components, you're not going to be able to do it effectively, and you're going to be influenced in ways that you're not even thinking about. So I see that as being an exciting future.

Well, you're going to be front and center on this, my friend. That's terrific. Do you have a link to the book, to the catalog page or download page?

Not yet. Everything is coming soon. So, next week.

Let me know so I can brag about it to the whole world. Great. Well, let me add another friend to the stage to make sure that we can have a wide-ranging discussion. And this is our dear friend, Ruben Puentedura. Ruben and I are sometimes seen as doppelgangers, which is really unfortunate, because Ruben for me always brings good things, whereas doppelgangers are usually bad. And Ruben, you're coming to us from what looks like an alien world.

Yes. This is another one of my AI experiments in generating ideas about learning spaces in the context of climate crisis and in the context of sustainability and so on. So this is yet another experiment: what could you do with a library if you suddenly had to think about it in a very different climate environment, and it had to be resilient in that context?

Very nice.

So I'm coming from an experiment, if you will.

Well, as are we all, I think. And friends, I like to brag to my students that Ruben is a friend, because my students are enthralled, and rightly so, by his SAMR methodology. But I wanted to ask you, looking ahead, Ruben, what are you going to be working on for the rest of the year? Are you also going to be focusing on AI and education?

That's definitely one of the key areas of focus.
So one of the areas for me is AI and education, both saying, okay, how do we leverage the tools, in particular in the context of SAMR: some of the results that have already been coming out in terms of how AI plays in the context of SAMR, in terms of how you use SAMR to think about the applications of AI, as well as specific areas of application of AI that can be leveraged, let's say, for maximum effect. So that's definitely one area. But another area for me is going to be the question of how do you develop new forms of AI, or forms of AI that are better suited, differently suited, to different tasks and so on. And with the advent now of GPT-like hybrid tools that allow you to take the tool, run it locally, modify it, actually be able to see what was used to train the tool, and train it using what you want, that's going to be a very active area of research for me. And the third area that I think I'm going to highlight in this context is the area of what I call the side effects: where do you start to get things, when AI enters the picture, that are not the obvious things? Some people are very focused on the very obvious things. Will my AI say very nasty things about somebody? Will it generate known false information? But sometimes, to me, some of the most interesting and at the same time some of the most challenging difficulties, but also opportunities, arise when you start to look at what happens when AI enters the world of work. What happens when AI enters the world of accessibility and availability of information in ways that you might not have predicted? What happens when, for instance, in places where you used to have certain types of friction in information, the friction goes away because of general availability of AI as a way to access and process that information? What type of side effects do you get? And some of the side effects can be good, and some of them can be really quite difficult to deal with.
So that's a third area for me.

Those all sound awesome. And I have the feeling that somewhere in the Atlantic Ocean, perhaps close to the Azores, Brent and Ruben, you both will coincide and really have things rocking and rolling with this. Friends, if you're new to the forum, I'm going to ask our two kind guests a couple of questions, and then they're going to probably explode and cut loose and have all kinds of ideas. But the project here is for me to get out of the way and to get all of your questions and comments going. And I can see in the chat already there's a whole stream of comments, which is great. I'm going to try and bring those in. But please use the Q&A box to throw in questions so that I can share them well with everybody. And if you want to join us on stage, please join us at any time. We're grateful to see you. One question that came in already is from our good friend Don Shawlis, and this is a kind of big-picture question I'd like to start with. He's been using the term robo colleges to talk about what he sees as a kind of dystopian future, one where faculty jobs are deskilled and increasingly replaced by AI plus other forms of data. And the question he had was: can you imagine such a fully automated robo college, where all of the instructional work is being done entirely by software? Is that something that we should anticipate?

I'll go first, because this is right on top of my mind: towards the end of the book that I'm writing, I have a section there all about predictions, right? So this makes sense. My answer is yes, very much. We need to be thinking about that. And the way that I envision it happening is that there are going to be plenty of low-level courses where, yeah, it'll make a lot of sense to have this kind of system used for a lot of those things. By a low-level course, I mean one that doesn't require that much additional guidance from an instructor.
The AI, like an advanced ChatGPT, will be able to address all of these things. It'll be able to assist in the process: hey, here's information; now let me test you on it. But we're talking more in depth, like guiding conversations, like asking you questions in a conversational way: what are your thoughts about this? So that whole thing, I think, will start to get replicated, and I see it in place first in low-level classes. Then, a little bit further down, I see this big distinction among universities, in that you'll have universities that are less funded, and they'll have way more robo courses like what you're talking about. And then universities that are Ivy League or more well-funded, well, they'll have more instructors. And so the view is going to be that, oh, that's a luxury. That's a privilege to have. You had a real-life instructor. You had a real professor that was teaching you this. Did you go to an Ivy League school? It's going to be that type of thing, where it's going to be viewed as a special thing, an enhanced thing. Now, the big thing that I talk about in my book is that all of us need to really be thinking about that, right? Maybe not the near, near future, but the coming future, I'd say within a decade. Because of this, we need to be thinking: okay, what type of instructors would be the ones that stay, the ones that offer the most benefit, not just the ones that are subject matter experts? Because again, we're not gatekeepers of knowledge anymore; there's so much knowledge available through the AI. You still need to be a subject matter expert, but what's going to be even more of a premium is: am I enthusiastic when I give my instruction? Am I able to captivate? Am I able to motivate my students? Am I able to create this environment within the class where we have this, you know, community of inquiry?
So am I giving that added value for students physically being there? Otherwise, why not take a robo course, right? So that's going to be the big thing, and those are the specific skills that instructors will need to maximize in order to be the most competitive going forward. A prime example of this is just to look at what Khan Academy is doing. They recently released a TED Talk, and Sal Khan showed some unbelievable stuff with a conversational tone. It's a tutor right now, just an advanced AI Khan tutor; I believe they call it Khanmigo. The idea is that it's GPT-4, but with additional processing so that it can do all sorts of additional things. That would be a prime example to look at to see where the near future is headed.

Just a quick note. First of all, thank you. That's fantastic. Just a quick note from what you're talking about: there's a comment in the chat from Shelby Rosengarten, who I believe is in Florida. Shelby, I wonder if your state's governor would actually be considering this for political purposes. It's a strange note. Brent, that's terrific. Thank you. Ruben, what do you think of Don's vision of the robo college?

I'll admit that I'm a little bit less enthusiastic about the robo college. Here's why. We can do it. I mean, if you're asking me, technically, is it possible for a tool à la GPT-4, with some tweaks, modifications, et cetera, to do what some people are describing as robo college? Sure. That's not the issue. But my question is: why? Why do we want to settle for robo college in that perspective? We have right now the tools, the capacity, the frameworks to go well beyond this. So I'll be honest with you. I was also looking in the chat at the question of, well, is it the introductory courses, or is it high school? I'm saying let's take a step back, because that's going to happen. Sal Khan has already put it out there, and we're going to see things like that. That's not the question. That's not the issue.
But right now, my thinking is more along the lines of: can we use this to take a different approach? Can we use this to increase student agency? Can we use this to increase learner agency at any age? I don't want to make this just about high school or college, traditional-age or any age, any group. Can we increase agency? Can we increase engagement? So, for instance, I look at this and I say, look, with this tool set, I can think of ways of constructing learning that allow me to learn using AI, learning alongside AI, in ways that I could not before. So GPT tools, absolutely, we're talking about using them. But as in that vision of the instructor that is a little wiser about what the student doesn't know and so on, where we still do Calc 1 and Physics 1 and Lit 1, et cetera, just with the robot? That to me is less intriguing, less interesting, than to say: let's take a step back. Let's look at the challenges that we're facing. Let's look at the really difficult and interesting problems that need our attention right now. How do we think about the learning differently using the AI? To me, there's a precedent; it's not the first time this has happened. Look, we've been using a technology for many, many years, long before computers were on the scene. It's called the book. The book is a technology, and libraries are machines, and colleges are machines. Unfortunately, at some point, some of that perspective has been lost: the sense that, okay, these machines make certain forms of thinking, learning, and creation possible. Well, let's take a step back and think about what AI can do for us, because I think that's where the real potential lies. Don't get me wrong. I'm sure that we're going to see the Khan Academy GPT AIs, et cetera, but I'd hate for us to get stuck there. My engagement is when people ask me, well, so what do you want to see? I'll be honest with you.
When people say, for instance, do you want to see the kindergartner pick up their book and read? The answer is yes, but I don't want the kindergartner to just be doing that. I want that kindergartner to have real agency in the community in which they live; I want that kindergartner to be able to work side by side with somebody who's in their late nineties, and also say, hey, that person also has real agency, and use the tool to build out the agency. In other words, it goes to a different notion. I'm saying, let's take the opportunity to really think about education in new ways, in deeply new ways.

Thank you. Thank you. You two are both brilliant. You can see this is an enormously, enormously deep subject with a wide, wide range of roots and implications. This is going in a lot of great directions. I want to thank you both. And I also want to bring up, if I can, a couple of questions that have come up in response to your comments right away. And again, Don, thank you for the really good question. Phil Lingard is coming to us, I think from Britain. Phil writes: robo colleges will be invaluable in Africa, where the demand for education completely outstrips the availability of qualified people capable of being academic instructors. That's not a question per se; that's a comment. But I'm wondering what the two of you think about that. Here, I'll put it back up so you can see it again.

Yeah. That's just the thing. It's one of those things. Yeah, I have enthusiasm for it, because I see so much possibility. But I don't negate anything of what you just said, because it's definitely going to be that way, in that I see a disparity occurring. I mean, to be honest, I really think there's going to be a completely robo college at some point, where it's going to be 100% automated. That isn't necessarily great, because I see so much value in the social aspect of learning with other human beings. I mean, I very much value the idea of AI plus HI, right?
So that's human intelligence and AI working together. Definitely I'm all for that. But even with this disparity, it's now a case of: let's look at it in a different way. The disparity is actually a different thing, in that there is no disparity if we didn't have access before and now we have all this access. There are plenty of people who want to go to college, but they won't be able to, because they have a full-time job; they have all these other things. But if they're able to use an AI system to go through the coursework, where it's 100% tailored around them and what they're doing? Wow, that's very powerful. Talk about the agency that they're taking. That's the other big thing that I talk about: whenever I talk about AI, I focus very much on the instructor, but then I flip it on them and I say, think about the empowerment that's happening now with the student, because now they can learn anything through the AI. The instructor is still super important, to be able to clarify, to help them understand with better examples, for the social aspects. I'm all about this whole idea of the experiential learning cycle: we need to discuss, we need to talk about it, bounce ideas off of each other. So that's important right there. But there's just so much that happens with that student. Plus, one of the big things that we're supposed to be doing in higher education is teaching our students to be lifelong learners. Wow, the empowerment that now happens with AI, in that they can fully be lifelong learners at any time, at any place. So to me, I see that as a big, powerful aspect of this, even though there are going to be some of these differences that we're talking about.

Thank you, Brent. Ruben, did you want to jump in on that?
No, and again, I do keep coming back to some of the questions. We need to get into a more specific scenario-by-scenario context, because one of the key elements of some of the scenarios is: well, what would the people who live there, who would, so to say, be using the resources, et cetera, what would they like? What would they want? What would they need to learn to use? And in some of the cases, what we've seen is mismatches between structures that have been the gateways to learning, as opposed to what actually works. I do tend to use the term agency once again: what brings agency to people on the ground? So I do keep coming back to that. I have absolutely no quarrel with the fact that AI can make access to knowledge, in certain forms, in certain contexts, available. So I have no quarrel with that being a useful application of it. I use it myself in that sense. It's no different than any other technology you might use. But my concern is more when we design the whole model around that, rather than around the opening up of new possibilities.

Okay. Thank you. The chat is just on fire right now. There's a ton of comments going back and forth. And I'm just staggered: there's a reference to a famous 19th-century political-economic debate. That's a sign of just how wonderful all of you are as a community. There's a related question, and this is from our friend Glenn McGee in Florida, who takes the idea and your responses in a broader direction. This is for you, Ruben, but I think Brent can answer as well: to what extent does AI weaponize the class conflict in the classroom, the power struggle between teachers and pupils?

Oh boy. A huge, huge question, Glenn, and thank you for bringing it up. The dynamics start shifting rather dramatically, and this is nothing new, of course; we've seen shifts in dynamics with all sorts of ranges of tools.
Again, I think this is the point at which you start to take a step towards thinking: okay, if you're looking at the power dynamics in the classroom, what has generated those power dynamics, and in what sense does the introduction of AI shift the balance of those power dynamics? What is the interest of all parties involved in shifting those power dynamics, and why? The short answer would be that it certainly makes the power dynamics completely untenable, longer term, in any of the standard forms that you see right now. I honestly don't think that's sustainable. But how it plays out in the longer picture also has a lot to do with the image of education that you have, the goals of education, the goals of learning, and the different roles that people play in this context. I'd be happy to get into a longer conversation about what this can look like, where we get into questions like the goals of education and so on. But the short answer is that it is profoundly destabilizing to the existing dynamics, and then it requires, as part of the process, a re-engagement with the ideas behind those dynamics.

Excellent. I hope Glenn likes this, because sociology is Glenn's main structure for interpreting the world, so I think he'll appreciate that. Brent, do you want to say something more to that too?

I was just going to say that there are different levels to that, because it's very multifaceted. But even at the micro level, you definitely have it right now. I've seen it everywhere with instructors: the whole aspect of, well, whatever I assign them, they can use ChatGPT. So that already means, well, now the students have more power. But again, it's the aspect of, okay, but you have to work with the assignments in the right way. The change that ChatGPT and other AIs are effecting in higher education is that it's improving it, because most of those assignments could have already been cheated on before.
We had the internet; we had essay mills; those all existed. It's funny, I had this conversation with a bunch of new instructors. These were all new instructors; I had a focus group with them, and I was asking them questions about all sorts of things: their experience, the facilities at the university, prior training. I had so many different questions for them. No matter what question I asked, no matter what topic, it always came back to ChatGPT. They always brought it back to ChatGPT, because that was such a power dynamic. So the idea there is that they need to have better understanding. They need to understand the way that they're doing the assessments. It can't just be this big assessment at the end, because then all the focus is just on, well, what's that end product? That's not how good education is supposed to work. With course alignment, you're supposed to have formative assessments throughout, where you can gauge their understanding and learning through that process, and you need to hold students accountable. I'm not really holding a student accountable if I'm just asking for a product at the end. If throughout the course I'm engaging with them, I'm asking them questions, they're having to produce in class, they're having to present in class, they're having to be able to back up what they're doing, that's the important part. So the agency, the power dynamics, it's at a micro level, but then again there are many different layers that come into it: the course and access and all these other things. So yeah, it is complicated, but there are many different levels that we can work on.

Well, you have two fans now, Brent. One is me, and also Amy McPherson in chat just says, go Brent. My instructional designer himself loves what you're doing. And this isn't enough, friends. Having these two geniuses on stage is just not sufficient; this problem is so huge. And I'm really excited to bring in our surprise guest, coming to us from Egypt: Maha Bali, who's been on the program before. And Maha has all kinds of
thoughts. Hello, Maha, and thank you for joining us late in the day.

Thanks for having me, Bryan. So good to see you both as well.

A pleasure, a pleasure. I was wondering, Maha, and this is just a supposition, tell me if I'm wrong here: you're interested in talking about how to support faculty in AI, and on multiple levels. There's the psychological, physical, emotional drain that faculty have been facing for the past few years, and then there's also the sheer complexity of this subject. As you've seen from our half-hour discussion so far, Brent and Ruben and everybody have been teasing out all kinds of implications of this. And I'm wondering if I could just ask you to start talking about this: how do you see all of academia best supporting faculty in this age of generative AI?

Okay, so thanks for that very difficult question, which I was also asking myself. Rather than answering your question first, if you like, let me pick up on two things that Brent and Ruben said that I liked. I really liked Ruben's emphasis on agency. I'm not really sure why someone thinks there's a fallacy there; I don't care about that. But I think emphasizing agency is important, and nurturing student agency, and there are a lot of ways, in the past and in the current situation, where AI can interrupt agency, or trick people into feeling they're getting agency when they aren't. So it's really important to be very mindful, for both faculty and students. And I really liked what Brent said, very nice to see you after all the Twitter convos, this emphasis on, you know, how do we redesign assessment; it's not about what AI can do. So the first thing I was saying, in the question that I posted, is that faculty are already burnt out from the pandemic and from being asked to redesign their assessments and rethink the way they teach: for a couple of years when we were fully online, and then coming back, oh, let's do hybrid, so try to focus on people in two places at the same time. So they're already pretty burnt out
with this. And then, honestly, Brent, the kind of discourse around the robo university isn't really good for anybody's well-being. For faculty, it feels like what happened in 2012 with MOOCs: oh, universities are not going to exist anymore. That did not happen, and I do not think that this is what's going to happen with AI. What I think is going to happen is, yeah, a slight shift in what education needs to be. So, what you were saying towards the end: let's rethink our assessments, let's rethink what we need to do, because some of the basics can be done with AI. The thing is, I think a lot of times, if you look really, really closely at what ChatGPT puts out, and I'm going to give the cake metaphor that I used on Twitter, if some of you saw that: making a cake, you can make it from scratch or from a box; you can buy it ready-baked; or you can buy a Twinkie, which is factory-made. All of them look the same. And honestly, a lot of what ChatGPT produces, if you don't know how to prompt it very well, is like a Twinkie: it's very basic and bland and not unique and does not have personality. There are a lot of other AIs, I was saying in the chat, that do search and help, and we need to think about how these things are going to help certain students, especially those who are, for example, non-native speakers of a language, where reading is difficult for them. There's AI that can help summarize the reading or rewrite it for them in a way that's more accessible to them. How can that help us in education? How can we make sure that students are using AI at the right time in the process? But also, what are the emotional aspects of the whole learning process? That's what I don't like about affective AI. I think there's a huge difference between being cared for by a human being and being cared for by a machine, and I think we all need the human being. Education is not just a cognitive thing, right? A lot of times, when we talk about MOOCs or AI, we're talking
about a cognitive thing. Yes, you could potentially use AI for mental health therapy. I am sure it doesn't do as great a job as a human, and I'm sure human beings mess up as well, but when human beings mess up, they can fix that, or they can find another human being to fix it, right? I think a lot of this talk is a little exaggerated for the moment. There's a lot of AI that does really, really, really useful work in the world, like around climate change. I met someone who had built AI that could detect forest fires as soon as they start, so you can stop them. That's useful AI, because you don't want a human being to put themselves at risk and be there in that spot, and you probably can't even do that with what we can do with text-generator AI. And with faculty, the issue I'm facing right now, as someone who works at a center for learning and teaching, is that there are some people who are willing to come in and try the AI. The people who are in the best place right now are the ones who have tried it: they know its capabilities, they can intuitively see if students are using it, and they're integrating it into their classes to experiment, to learn from students, because students use it in ways different from what we expected. Not everyone has that energy, and asking them to redo their assignments, which is what we're trying to help them do, is also a lot of effort. And I think it's really important to figure something out. I don't have an answer right now, but I really don't think we should be making them feel like the AI is going to replace them. At the moment, I think we can push them: definitely, how can you make your assessment more meaningful? How can you motivate students? And also, which aspects of what you teach can be learned from a box? Does it really need to be completely baked from scratch? Sometimes it does, but sometimes it doesn't. And a lot of learning is happening, should be happening, outside the written moments. If you teach engineering, or if you teach
sociology, your students should be going out into the world and doing physical things that cannot be done with AI, or maybe they can be helped with AI, but there's actual work. If you're doing actual community-based learning and helping an actual community solve a problem, that will really show you something physically improved by your work. And then, if writing it up with AI produces something accurate, that's fine. A lot of times it won't be accurate, but if it can, that's not a problem. So that's kind of where I'm trying to get faculty's heads: you know, if you can, experiment with it; if writing is not the thing that you're teaching, focus on what students really need to learn to do in the real world. And also talk to people in industry and find out what they want from your students, because sometimes they do want them to learn AI, and sometimes they want them to be able to do things from scratch as well. It depends on the field. A very nuanced conversation.

Well, Maha, thank you so much. This is terrific, hearing so much from you on this. When people talk about making food from scratch, I often think about a baker I knew who actually bought an acre of land in order to raise grain from scratch, in order to mill it into flour. That's extreme; most people haven't done that. But you know, where "scratch" begins is an interesting question. We have a whole bunch of questions. Maha, can I keep you on stage for a bit? Excellent, excellent. We have a whole bunch of questions. I don't think we can get to all of them, but I want to try and pick out the ones that are really, really telling. Here's a very thoughtful question, and this is from Kate Herzog. She asks: do you see AI as having the ability to provide Socratic questioning?

It does, a little bit. You can try it; it's not great. I've tried the whole thing of using it as a tutor and having it give me feedback. It's not bad. I think if I asked it about very advanced, PhD-level stuff, it wouldn't be able to, but on a first-year-level type of
thing, it does okay. What about now, I don't know. Brent, with the more advanced ChatGPT Plus?

I try to focus mainly on the free version of ChatGPT, just so that the maximum number of people have access to it. Even with that, and Brian, you know about this too, you can use it in so many different ways: creating simulations, creating built-in games, creating this whole aspect of it's going to ask you questions, it's going to have you go through a process. It can be used in a lot of different ways. But again, I'm definitely not trying to hold anyone back or put anyone down in any way; I'm just trying to be future-thinking. I suffered through COVID as well, and it was very draining, very hard, but I always tried to take the approach that I'm looking at things both as an instructor and as a student, because I had great experiences as a student and I had very negative experiences as a student, both online and face to face. So yes, I have instructors who are full-on wanting to learn more about ChatGPT, but I have plenty of instructors who are burnt out. I understand that, but they don't want to hear about ChatGPT, and to me, okay, I understand you're burnt out and I want to assist as much as possible, but now I get upset, because now, guess what: you're doing a disservice to a student. You're not properly helping the student. So here's what I tell instructors, because we had a big issue here with the whole aspect of AI detection and plagiarism and all that. One of the big things is that I'm always about instructor agency: they have control, they're deciding when to use ChatGPT and when not to, not the student. But the big thing I tell them is that you should always be directly telling the student: hey, for this assignment we're not using ChatGPT, we're not using AI; for that assignment we will. For this assignment I'm trying to build skills mastery, so in the future we're going to use it for this other assignment. What I really push for the instructor is: for every assignment, tell the student whether they can or can't use it, because students are going to push more and more. "Why can't I use this? Why can't I use my AI? I have ChatGPT on my phone, so it's a part of me, because I always have my phone with me. The fact that you're saying I can't use ChatGPT, I can't use my AI, I don't think that's fair." That's what a student is going to start to say, because it's going to be so integrated into everything: fully integrated into your word processor, fully integrated into Adobe, into any web page you go to, any app. So I really see the student starting to develop this idea that it's a given that AI is used for everything. Again, I fully understand the whole aspect of us not wanting to change yet again, but we need to at least be prepared to answer that student and be able to say exactly why we're not using AI for this one: because in order for you to be a good engineer, you have to know this concept here, so you can't use ChatGPT for this, because over there, where you can use ChatGPT or another AI, you're going to need what you develop here. There's going to be a mixture of these things, so it needs to be really well thought out, on different levels.

Yeah, you're right, and students are indeed saying that. The funnier thing is that students are using Quillbot and not counting that as AI. Oh, that's just open all the time; it's kind of like, that just happens to my text, there's no agency here. It's transparent to them that they're using Quillbot; they're not counting that as part of the AI process. I think you're right in taking that time to explain, but then the faculty actually really need to have a little bit of AI literacy to be able to tell them that. I
think the other element of this that we haven't talked about, and I know Brian knows about this because we've talked about it, is the training datasets. The data ChatGPT has been trained on does not have a lot of knowledge from my part of the world. I'm in Egypt, right? And it's not just the Arabic-language text that it produces: it's fluent in Arabic, but its thinking is like a Western person speaking in Arabic, and its knowledge about my part of the world is really messed up. Whatever level of accuracy you can get on a hundred-level psychology course with ChatGPT, it doesn't have that kind of knowledge about my part of the world, and it hallucinates like crazy there. That is really frustrating to see, or good, I'm not sure. I'm a computer scientist originally; my undergraduate work was on neural networks. I know how machine learning works. I know they could have trained it better, I know they could have allowed users to train it, and they haven't done it that way; they've done it so that it learns from what they gave it, partially to try to prevent it from doing bad things. But why is it continually trained that way? Someone was talking about the visual AI smiling an American smile: why does visual AI create people who smile that way? There's a lot of hegemonic knowledge built into it.

Maha, this is terrific, and both of you, these are really deep answers to this question. Again, we're doing a high-wire, high-speed graduate seminar on this topic right now. Maha, something you said reminds me of something that Ruben wanted to talk about, and I want to throw to that. But before I get there, just a couple of quick notes. First of all, everybody in the chat: do you mind if I post the chat to my blog, anonymized? This is already an unbelievably rich chat, so please just let me know in the chat if you don't want me to use it. Also, in the spirit of Brent's point that this stuff is already out there and we can't resist using it: in the chat I just put a link to an article from this morning by a wonderful librarian, Barbara Fister, who makes an interesting comparison between ChatGPT and Wikipedia. I strongly recommend it; it's a great introduction to ChatGPT, with some wonderful stuff there. And thank you all for the approvals.

Maha was talking about training datasets, and Ruben, I know you're really keen on getting away from the giant proprietary corporate black-box datasets and tools. Right now, do we have options in the DIY, open, and libre worlds?

Yes. The short answer is yes; the long answer is yes, we do. Here's where things are at right now. We've had some tools developed, in varying degrees of libre-ness, from scratch; for instance, Bloom, an EU project, was developed that way, and it's already showing some promise. There have been some issues: Bloom is a very big model if you're going to deal with some aspects of it. But there have been, in recent days, some really exciting developments. What happened is that Meta released one of its tools, LLaMA, as open source with some restrictions on it, and Stanford took it and made Alpaca. Alpaca is a small-scale model; it still exists, you can download and use it, but it started to show the way toward a model that you could train rapidly and efficiently and actually run on something that doesn't require supercomputers. Vicuna has been a derivative of that model, and now we have StableVicuna, which about four days ago was released in a 13-billion-parameter version. The beauty of all of these models, and this is just the beginning, we're seeing an explosion of them, is that all of them have at least a couple of key features. Number one, you can actually see the code; you can play with it. There may be restrictions, but usually the restrictions are on commercial use, not non-commercial use, and even the commercial ones are negotiable, so to speak. But the second thing, which goes to what I
was pointing out, is absolutely correct: the training models that have been used for many of the well-known engines have very serious limitations, and they speak very little to parts of the world other than parts of the US and the EU. But these new models can also be trained, and what we're seeing is that the cost and difficulty of training is plummeting, and I really mean plummeting. We're going from "you need a stable of supercomputers that only a few companies in the world have," to "well, you can do it if you can purchase enough AWS time," to "you can do it for about 600 bucks," to now we're around 200 dollars to do a complete training of one of these. So we are really rapidly converging on a whole new generation, and the beauty of this generation is that we can make it into what we want it to be, with, of course, the constraints of how the engine works; large language models have some constraints in terms of what they can and cannot do, how they appear, et cetera. Maha, you mentioned your background in computer science, so you know the limitations of this very well. There are, of course, many of those, but there are also opportunities, and I want to encourage as many people as possible to start getting your hands into this. Would I, these days, use StableVicuna 13 billion to teach a course? Probably not. It's impressive, but there are instabilities; there are places where I'm scratching my head, saying, what went wrong there? On the other hand, I can actually train it in some ways. I can say: this is a body of knowledge I'd like my students and me to explore jointly, and I'm going to train the engine on it, so it's not just spouting whatever hallucination came from taking so many random hops that it went way off into the wild yonder. That's where hallucination comes from: it's the result of taking one step too far down that random walk. Instead, you say: no, this is focused on this body of knowledge, or this context, or this perspective. And that opens it up. Again, these are early days still, but they're exciting early days. I'm not talking about the kind of thing where I say, a decade or two from now we'll be able to do this; I'm saying right now you can actually start playing with some of these models, online or on your own systems, and really explore alternatives. Would I ask somebody who has never done this to just do it and immediately use it? No. But is it somewhere I would encourage people to explore, and to start the process of thinking and building out? Absolutely yes.

So perhaps we should anticipate something like a library-driven, or consortium-driven, or nonprofit, Internet Archive-like effort, where the dataset is much better. Perhaps, and I'm just making this up, it's entirely trained on .edu domain content, or entirely on CC-licensed content. And then the second wave after that would be having students and faculty doing this.

See, that's just it, Brian. To me, I'm still amazed that there isn't a competitor to ChatGPT that is purely academic. I still can't fully grasp it, because of everything you were just saying. I can understand, my university is a small university, but all these huge universities, why couldn't they come together and do exactly what you're saying? There are millions of images out there that are public domain, and so much that people would be willing to create and give, so that it would be locked within .edu, purely academic on some level. I wish there were more push for that, and I think it would be really enhancing for all of academia. And especially make sure that it's international, so that it can have all these types of flavor and understanding, to really bring all of us together. I would really appreciate something like that.

Would Alpaca, wasn't that Stanford students, would that be an example? Yes, absolutely, and
there are projects, for instance around Bloom, that have been similarly focused with academic institutions, but I think there's still a huge range of possibilities. So absolutely, you've seen some of the large academic institutions, or EU projects with Bloom, undertake this. But again, I would love to see many, many more people in the pool, because I have great respect for Stanford's AI department, make no mistake, but that's one perspective, from one part of the world, from one pool of students and professors that they draw from. It's great that they're there, but the more people we get trying out and playing with different versions of this, the richer it'll be. If you look, for instance, at image generation, we're now beginning to see some of the AI models being trained, at least at the top tiers, to start exploring different types of image representation. What if you don't train on the LAION dataset but on a different dataset of images? Do you start to get different perspectives, and so on? So again, you need to get all of the different perspectives, not just the ones that come from one part of the world, or a set of parts of the world, if you will, to see what's possible here.

Excellent, excellent. Friends, I know you have two minutes.

Can I say one thing really quickly? What I'm concerned about is that, if we have limited resources for where we spend money to improve education, we spend so much more of it on the technology rather than on the people. That's what I'm concerned about. Thank you.

Thank you. Always, always about the people. Oh shoot, we have so little time, and we have two questions. This is a big question, and I'm wondering if each of you could just say one or two points in answer to it, because it's a great one and we need to come back to it. This is from Jenna Linskins at Ithaca College in New York, and I'm just wondering if each of you could touch on it a little bit: how is AI changing the way people will work in the professional world, and then what's the impact on our students and the instruction we provide today? If you just toss out a quick thing, and we'll need to circle back to this, Jenna, but I'm wondering if each of you could say a little bit on that.

Yeah, so I'll start off. I'm sure one of you will talk about computer programming and computer code, so I'm not going to address that part. But I've been trying to secretly observe other people in different facets to see how they use ChatGPT, because generally ChatGPT is the big one, right? And it's funny, because I see more and more people starting to try to use it to see if they're correct in what they're doing. It's like they want to have some sort of co-pilot to make sure that they're doing it right, or to give them a starting point. And I know one person who's using it as their new search engine: they don't even go to Google first, they go to ChatGPT first, and then if that won't help, they go to Google. That's a huge shift. So I see a lot of people starting to incorporate it into their overall process, just to help them as a co-pilot.

Thank you. Thank you. Great one.
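The "co-pilot" and Socratic-tutor uses the panel keeps returning to mostly come down to prompt design. As a minimal sketch, here is how such a tutoring session might be framed for a chat-style model; the system prompt wording and the `build_tutor_messages` helper are illustrative assumptions, not anything a panelist described, and the role/content message format is the OpenAI-style convention that many of the open models mentioned earlier also accept.

```python
# Illustrative sketch only: prompt wording and helper name are assumptions.

# A system prompt steering a chat model toward Socratic questioning
# rather than direct answers (the tutoring use Brent and Maha tested).
SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Do not give answers directly. "
    "Reply to each student message with one probing question that "
    "nudges the student to reason out the answer themselves."
)

def build_tutor_messages(history, student_message):
    """Assemble the message list that OpenAI-style chat-completion
    APIs (and many open-model servers) expect."""
    messages = [{"role": "system", "content": SOCRATIC_PROMPT}]
    messages.extend(history)          # earlier turns, oldest first
    messages.append({"role": "user", "content": student_message})
    return messages

# First turn of a session: no history yet.
msgs = build_tutor_messages([], "Why does the moon have phases?")
```

The same message list could then be sent to ChatGPT's API or to a locally hosted open model; swapping the system prompt is all it takes to move between "co-pilot," "search assistant," and "tutor" behaviors.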
Yeah, I think from my perspective, and again I wish we had more time on this one, but very briefly: in terms of jobs and how AI interacts with jobs, the model that's very clearly going to be dominant is one of task replacement rather than job replacement. The thing about that is, it sounds great at the beginning, because you say tasks are getting replaced, but hold on. This is one of my areas of research: the side effects, both good and bad. On the good side, there's actually a new research paper that came out just a couple of days ago through the NBER which says that workers at lower income levels, lower skill levels, et cetera, can perform and be rewarded as though they were at higher levels, and that AI has a leveling-upwards effect if, and the key word is if, it's contextualized appropriately within the context of the job. That would be a positive side effect of having AI in this task space. But the negative side can also come in, and we're seeing it; I saw it going by in the chat. It's coming in, in some writing jobs, for instance, where some people are saying: oh, you know what, what I can get with ChatGPT is good enough, so I'll keep you on and you can still help me prune through this, but because the bulk of the work is now being done by ChatGPT or whatever it is, I'm going to pay you less, or I'm going to use less of your time. And that's a negative net effect. So it's neither one nor the other; this is one of those cases where I come back to: give people the tools so they can understand better, leverage the tool better, and advocate for themselves, and say, hold on, if I am represented in whichever way, by a union, by a structure, by some type of managerial relation, whatever it is, I can position myself, and we can work together to define situations where you're looking at the better side effects rather than the more negative ones. That's the quickest answer I can give.

Maha, you have a minute, don't you?

Building on what Ruben said, I agree with everything you've said there, and I will just add a good contextualized example: translation. We have auto-translation now, and it's pretty good in a lot of languages. We still have human translators; it makes them a little more productive, but it doesn't replace them. It does replace them in situations where you weren't going to pay for a translator anyway: you're stuck for two days in a country where you don't know the language, you need a translation, so you use it. That's helpful, but it doesn't do professional translations, and when people use these tools for professional translations, they're really bad; they still need the humans. I'm hoping that will be the direction for text-generation AI. Some other AI is doing stuff that humans can't do anyway, and those are the AIs that we should be really proud of and happy with, and making sure that they work well, and thinking about how AI can support people with disabilities, for example. Like I was just talking about: you put a visual AI on a walking stick, right? That's fantastic.

Well, first of all, thank you, each of you, for such concise and yet rich answers to that fantastic question, and thank you very much for that great question, Jenna; we'll have to circle back to it. We have shattered our time limit, and as a result I have to wrap things up, which is enormously frustrating, because this is terrific. What I think is that we need to have each of you back to do a kind of workshop version of the forum: Maha to talk about faculty support and how to do it right, Ruben to walk us through using some of these DIY tools, and Brent to show us how to do this in terms of AI literacy. I'm just putting that out there; I'd love to see that. But really quickly, in order to wrap things up, tell us: how can we keep up with each of you? What's the best way to follow your work? Brent, starting off with you, your research and your book: is Twitter the best way?

Yeah, Twitter definitely: Brent Anders. I'm putting out new
information, and I'm really big on trying to make infographics to help people, because the idea there is to take the really complicated stuff and make it simple and easy to understand through a simple infographic. I'm trying to do that, and I'm also trying to create some short videos so people can quickly learn some key things and then move on from there. And yeah, I'll be pushing information about my book as soon as I can.

Thank you, Brent.

For my part, probably a combination of Twitter and LinkedIn is the best place right now. I used to be very heavily Twitter-based; these days, for reasons that are common knowledge, Twitter is not quite the place it used to be, so I'm looking to rearrange that, and I'm also looking at a few other social tools. But the simplest way right now is Twitter and LinkedIn.

Thank you, thank you. And Maha?

I am Bali_Maha on Twitter, and I couldn't leave because of this AI stuff, but I'm also on Mastodon a little bit. And there's my blog, blog.mahabali.me: I post a lot of incomplete thoughts about AI there, and apparently they're helping people, because nobody knows anything, really. Maybe Brent knows a little bit, but all of us are really just experimenting and discovering together, so sometimes it's more useful to ask questions than to give knowledge.

It's so crazy that you said that, because in my mind I'm like, yeah, we've been using AI for, like, two, three years now, right? Five months! ChatGPT has only been around for about five months.

It's interesting; it's like pandemic time, but in reverse. Thank you. Maha, Ruben, Brent, thank you for being fantastic guests. We're going to follow up with each of you. And thank you very much to all participants for all of these comments and all of these great questions. We're going to post them on the blog, probably in several posts, because there's a lot going on here. I want to thank everybody for a fantastic, fantastic hour. To keep going with this: you've heard that everybody has their different handles and different places to find them, so please use the hashtag #FTTE if you can, in order to keep things going. Here's me on Mastodon and Twitter. If you'd like to dive into our previous sessions on AI-related topics, just go to tinyurl.com. If you want to look into our upcoming sessions, which cover other parts of the future of higher education, from campus economics to sexual assault to faculty data, just go to forum.futureofeducation.us. And thank you again for the opportunity to think together, to collaboratively grapple with the future of higher education. I hope those of you in the northern hemisphere are enjoying spring as it gives way to summer. Above all, I hope you're all safe. Take care, and we'll see you next time online. Bye-bye.