Welcome everybody. Welcome to the Future Trends Forum. I'm delighted to see you all here today. We have a terrific topic and one of our great, great guests, and we're really looking forward to our conversation. For about a year now, we've been talking about AI, and we've often been talking about it in terms of the great corporate giants that are doing immense work in this field: OpenAI, for example, funded and more or less owned by Microsoft. We think about Meta. We think about Google, and we think about other companies as well, Amazon and Apple, that are working in the space. And no shade here; they've done immense work, very, very influential, very high-profile work. This is what we're reacting to in higher education: how to respond, for example, when Microsoft infuses AI into its Office products. But at the same time, the open source world has been very, very busy. There have been a lot of open source projects which are really advancing the field in all kinds of ways. After all of our forum sessions on AI, I wanted to make sure we'd have a chance to really dive into these. There's no better guest I can think of on Earth than Ruben Puentedura. You may know Ruben Puentedura as the creator of the SAMR model for thinking through educational technology. You may know him as a superb consultant and thinker on educational technology. You may know him as part of the Bryan Alexander lookalike contest. But overall, he is just a fantastic, fantastic person, and without any further ado, I would really like to bring him up on stage. Hello, Ruben.

Hey Bryan, how are you doing?

Very well. Where have we found you today?

You found me in, well, in Williamstown, Massachusetts, and what you're seeing here is part of my exploration into possible learning spaces for the future. This is an exploration of how you can take some modernist ideas and blend them into solarpunk exteriors.
You can't see the solarpunk exterior in this particular rendering, but part of the idea is to say, well, how do you get from here to there where climate change is concerned? How do you take buildings that were designed with a modernist idea for education and get them there? But that's a topic for another day.

I can see the green behind you, and I can also see a kind of brown or tan-colored stylized musical note. We've got the modernist aspect there. Well, Ruben, we're going to have to bring you back just to talk about this in all seriousness. But we ask people to introduce themselves in all kinds of ways, and one of the traditional ways is to praise someone's beard, and your beard is, of course, terrific. But I'd also like to ask: what are you working on for the next year? Are you mostly focusing on open source AI and solarpunk educational design?

Yeah, that's a great question. So a large portion of what I'm doing is indeed AI, not just open source, but a very large chunk of it is indeed going to be open source. And I'm looking at the whole range of uses for AI. But in particular, my main focus right now is on generative AI, both image and text, as well as some of the deep learning applications. And some of that is going to be commercial. So I do use Midjourney, I do use ChatGPT-4, I do use Wolfram Alpha, and they're great tools, make no mistake. But I also find that the open source, the Libre AI tools, are just as essential to what I do, and in some ways perhaps more so than the commercial ones, because they give me both a tool set for carrying out ideas, for exploring ideas, and a tool set for digging deeper into understanding what's going on, into understanding what the tools can and can't do, why, how they do that, et cetera, in ways that, frankly, the commercial tools, because they are closed boxes for the most part, I cannot use for that purpose. So yeah, that's going to be a large chunk of my work for the coming year.

Oh, excellent, excellent.
Well then, just to start off: friends, if you're new to the forum, I usually start off by interrogating our guests with a few questions to get the ball rolling, but then I want to get out of the way and open the floor to all of you for your questions. So as Ruben and I speak, please think about your questions. And remember, in the forum all questions are good; we're excited about all of them. So if we're throwing around terminology, please don't be afraid to ask: what does Libre mean? How is that different from open? And please ask us questions at any point. And again, if you have any technical questions, please ping Wesson, who will be glad to help you.

My first question, I guess, is this: Ruben, could you talk us through first, what are some of the really important open AI tools in generative AI that we should be looking at as academics?

So I would say that if I were looking at AI as an academic, I would start right away with the large language models that are available in the open source, or Libre, world. I would look at tools that are based on a model that is not completely open source or Libre: LLaMA, which was released by Meta from its work, and now Llama 2, which is somewhat more open, but we still don't have full access to the training set. As well as some other related tools: RedPajama, for instance, which is truly open, since we do have access to the training set, the weights, models built upon it, et cetera. And some other newcomers to the field: Falcon, for instance, which is done by a consortium with contributions from the Middle East, as well as some older tools like BLOOM, which was developed with European backing. So there's a huge range of them, but really, right now the hot developments, if you will, are all happening in the environment of LLaMA, RedPajama, Falcon and others.
That particular family or set of tools is very similar to ChatGPT: how they're designed as large language models, what you can do with them, how you can use them. So I would definitely say that's one set of tools that's worthwhile looking at. And then the second set is the generative image tools, and Stable Diffusion is sort of the ruler, if you will, of the kingdom there, in terms of the most advanced, the most developed tool. And once again, that is indeed open source. You have access to the model. You have access to the training sets; it's based on the LAION training sets, so you can see what images it was trained on and explore them. And so you actually have access to, again, all the necessary tools for digging in. So if you were to say, well, I've only got limited time, those are the two I would focus on.

If you have a bit more time, it's worthwhile digging into what is a rich cornucopia of Python tools, what I call a Python Lego kit, for building other tools. And this gets a little bit deeper. You need a little bit of coding experience, but you don't need to be an experienced programmer. You can use tools like ChatGPT, and now some other Libre tools, to help you with constructing solutions. These allow you to look at complex sets of data. Say, again, we're talking climate change and you want to make sense of what's going on with the fires in Canada. I noticed you posted on your Twitter stream; sorry, I'm not gonna call it the X stream for now. Nobody has.

Nobody has, yeah, I don't think so.

So I noticed you posted the map of the Canadian fires, and that tells you what's happening. But if you wanna make sense of how these fires are progressing, how they're connected to climate change and so on, you need a deeper set of tools to do that. And you can build those out. Now, that's a little bit further along, that's a bit deeper.
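As an editorial aside on what that "Python Lego kit" style of work can look like: even a few lines of standard-library Python get you from raw records to a summary. The snippet below is only a sketch with invented wildfire numbers; the column names and figures are made up for illustration and are not real Canadian fire data.

```python
import csv
import io

# Invented records standing in for a real fire-agency CSV download.
raw = """province,month,hectares
Quebec,May,110000
Quebec,June,780000
Alberta,May,520000
Alberta,June,310000
"""

def burned_by_province(csv_text):
    """Total hectares burned per province."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["province"]] = totals.get(row["province"], 0) + int(row["hectares"])
    return totals

print(burned_by_province(raw))  # {'Quebec': 890000, 'Alberta': 830000}
```

From a summary like this, the same kit (plotting and mapping libraries, statistical tools) lets you build out toward the kind of progression-over-time analysis Ruben describes.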
I would recommend starting with the large language models and the generative image models as a good place to sink your teeth into the topic, and then, as you become more comfortable with those, build out from them.

I see, I see. Well, that's a great list right there. And actually, let me interrupt myself. We have a good clarifying question from our friend Tom Haymes. And Tom asks, if I can press the button, right: what tool would you send normals to who are fascinated by ChatGPT, in order to demonstrate that OpenAI is not the only game in town? And I'm gonna infer that by normals he means somebody who isn't used to coding, somebody who isn't used to the open source world, someone who just wants, you know, a peek into that world.

Okay, so right now, the simplest way to just get a taste of what's going on is a tool for iPhones and Android phones that the RedPajama team has put out. And afterwards, Bryan, I'm sure we can add the links so people can download it from the archive later on; I'd rather not get into 10,000 URLs in the chat. But those tools are very simple. You just download them, you put them on your phone or your iPad or your Android tablet or your Android phone, whatever it may happen to be. And they allow you to run small models (these are the smallest models, so don't expect great miracles from them) to get a taste of what's possible with open source AI. And then if you go beyond that, again, Bryan and I discussed this a little bit before this session, we're gonna have, or you already have, in fact, I think, available to you from the invite link, several of the resources, which are not quite as simple as just plug and play, but they are pretty simple. You know, they'll run on a Mac, they'll run on a PC; you download them, you install them, you put the model you want inside, and you can run it.
So it's a little bit like saying: you buy your car, and you either plug it into the wall or you buy some gasoline, according to whatever you're using these days, hopefully the former. And then you're ready to roll. So those are not quite as simple as the phone or tablet-based tools, but they are pretty simple. And of course, again, if you really wanna start digging deep, then all of these allow you to get under the hood, change the code, see what the code is doing, et cetera. And the sky's the limit once you're doing that.

And that's something I wanna emphasize right now, Bryan, because one of the things you read a lot in the press is, oh, nobody knows what it's doing. That's a flat-out lie. I can look at the code running on my little AI box and I can tell you, oh, okay, so here it's running into an issue; here's why it gave me this wrong result. You can actually do that. Does that require a little bit more digging than just running it? Yes. But is it as difficult or opaque as some of the reports have made it out to be? No, that's not true at all. Are there complexities where we're still trying to figure out, gee, this is working exactly how, what are the fine details? Sure. But the bigger picture? No, that's perfectly accessible.

That's a great point. So start with RedPajama. And of course, thank you, Tom; it's a very good question, as always. Friends, if you're new to the forum, that's an example of a Q&A box question. You can just type that in and I'll flash it on the screen. And speaking of flashing on the screen, I just shared in the chat a link to the login page for today's event; not to be redundant, but that's where Ruben listed a whole series of tools and platforms to explore. So you can just grab that at any point. Then let me ask a second question before I open it up. We've been talking so far in terms of, I think, individual users, such as Tom's normal person.
You've been talking about people who have some experience with code. But if we can just scale up our conversation for a second and think about an academic institution, say a college or a university, or even a division or a department: what advice do you have for these organizations as they look into the AI world and consider the open source part of it?

That's a great question, Bryan. I think one of the first things I would suggest is that this is not a just-wait-and-see type of question, nor is it just a fad or a trend, though I realize some people have been claiming that. This has been coming for a long time. Yes, it's true that ChatGPT was, wow, a big surprise in terms of what people saw it could do. But in fact, if you'd been following large language models for a while, you knew it was coming sometime, if not this year then next, and similarly with image generation, and similarly with other deep learning tools. So it's important to realize that this is not just a flash in the pan that showed up one day and is going to go away. It's something that really deeply transforms how we think about learning, how we think about academia, and how we think about the world of jobs in general, the world that we inhabit. So that's the first thing.

The second thing I would say is you need to engage in serious conversations around this, because one of the things that concerns me is a lot of people are being told, well, you're on your own, folks, have fun. And that's really not a great way to do things. In particular, one of the things I would argue about AI, where we're at right now, where these tools are right now, and where Libre, open source AI is, is that it allows us the space for taking a step back and saying, hey, how do we really achieve what we wanted to achieve in academia? I see a lot of people, for instance, worrying and saying, oh, we've already seen the decline of enrollments in the humanities, that's it.
This is the death knell. That's exactly wrong. If anything, Bryan, I would argue that this could be the rebirth of the humanities. If we think about it with the appropriate context, the appropriate tools and the appropriate support, it can be the place where we get back to the heart of why the humanities, of what it is that we want out of this. Why? Because one of the things that we need to make the best use of AI, and the most equitable use of AI as well, the fairest use, the one that gives people individual agency over what they do in their learning and in their work, is critical thinking at a very deep level. And we need to get back, then, to thinking about the humanities and what we do in the humanities: writing, reading, the processes associated with that, sharing, discussing and so on, as a way of getting at that critical thinking approach, at those tool sets for understanding.

And similarly, and you and I have spoken about this before, and I'm sure some people in the audience have heard me talk about this before, other aspects of reading also come to the foreground. For instance, the uses of metaphor as a tool for thinking. If you're thinking about complex phenomena that are likely to lead to black-swan-type events, if you're trying to think about how you develop resilience and anti-fragility in the face of complex phenomena, such as climate change, such as pandemics, such as huge global migrations of people for multiple reasons, you need to be able to think in terms of futures, you need to be able to think in terms of possibilities, you need to be able to think in ways that don't just reflect the same way you've been thinking all along. And the uses of metaphor, which is something that you learn from reading, and not just reading: from watching film, from theater, from performing, again, from the rich scope of what you get from the arts and the humanities, is also a key component of that.
So this is the other aspect that I would give as advice: think how you bring those conversations to the foreground, and think about how you support them, and how you support your faculty in terms of opportunity, support in every way that they need it, whether it's release time, et cetera. And yes, finances where appropriate, resources where appropriate, et cetera, to engage in that conversation.

Excellent, what a great answer. That is a humanist who plays with technology. I cheer this on a great deal. But let me stop asking questions, friends, because you all have questions that have been piling up, and I wanna make sure that each of you gets a crack at Ruben. And this is one that looks ahead a couple of weeks, and it's a very practical one, and I'm gonna put a little spin on it; you'll see what I mean. This is from Carly Brady at Medicine, who asks: planning for our fall in-service, what would you prepare as an agenda for a presentation to faculty on AI? And it's a great question, Carly. And the spin I was gonna put on it is: what about the open source AI part?

That's a great question. So basically, for a presentation on AI, I would look towards thinking about what the faculty will be doing with AI, right? So I would put in a few of what I would call, well, it's not exactly low-hanging fruit; rather, it's a question of an understanding of AI in terms of the type of things that faculty do. And this is of course where I'd need to sit down and think: okay, so what are your faculty's primary activities? What have you seen them do? What have you seen them be curious about, et cetera? So that it meets their interests. So I would look to saying, okay, this is what AI does in terms of what you do. Start them off with an actual hands-on activity where they get their hands dirty, okay? Just play with the toy a little bit. And then, in terms of introducing Libre AI, look to some type of activity that helps scaffold understanding of what's going on.
And for instance, what I find is a very rewarding activity is to take a very small model, one of the, well, "small", 7 billion parameter ones. I know that doesn't sound very small, but it is compared to some of the larger ones that you might use. And then take a 65 billion parameter model, and show the type of patterns that each model detects. Because one of the things is, people keep asking, well, does ChatGPT, or do the large language models, think? And the answer is no: they are pattern detectors, pattern constructors. They are tools that can both elicit inferential patterns, instantiate inferential patterns, create something according to a set of patterns, but there's no thinking behind it. That's important to realize. The AI doesn't want to do anything. It doesn't think about it. And it's valuable for faculty to be able to try this out, to get their hands dirty with that, and to see how it can become less or more effective in different ways. And the Libre AI tools allow you to see that, because ChatGPT-4 is always operating at maximum efficiency, right? But if you can see, well, this is what happens if you have fewer parameters, this is what happens if you have more parameters, it starts to give people a little bit of a taste of what's going on.

And with the image AIs, you can do the same thing. What happens if you tell it, well, deviate a lot from the original idea? Or play with the interplay between shapes: you tell it, you know, match the shape very closely; now don't match it as closely. And again, Midjourney has some amazing tools, but there are some things you can do with Stability AI's Stable Diffusion that allow people to see, oh, so that's what's going on under the hood. So that's the type of thing I would look at: a mixture of the pragmatic (what are you going to do with this? what does this enable that you couldn't do before?) and, finally, answering, because let's be realistic, the fears.
In other words, the fear that I do this and now it's going to become useless, or I do this and now, you know, all the students are just going to use ChatGPT or whatever, and it's not going to be useful anymore. And gently guide them through: well, how do you rethink this? And, you know, when people ask me what I would recommend as references, I would recommend looking, for instance, and I don't know if people are familiar with it, at John Bean and Dan Melzer's book Engaging Ideas. It's now in its third edition; Dan Melzer came in for the third edition. And one of the things you can do in preparation for this type of session is say: okay, take ideas from Engaging Ideas, for instance where writing is concerned, and ask, how do you take one of these projects and rethink it, remix it, transmogrify it in light of something like a large language model tool, so that the heart of the project, the heart of the tool, the heart of the assessment, whatever it may happen to be, the writing project, et cetera, reflects what you wanted to do, let's say with critical thinking, except only better if you bring ChatGPT into the mixture? And again, what I'm trying to say here is: don't try to do everything for faculty, because you can't. Okay? But you can at least show them a path. You can show them how you take apart the old engine, if you will. Okay, so let's go back to that metaphor about the car. There are places where you can take your old car and replace its internal combustion engine with a modern electric engine. Yeah, I know it's not easy to do that. Okay, but let's take the metaphor a little bit further: instead of giving everybody their own car, you show them, well, this is how you do this, and this is how you would apply the process to get to where you want to go.

I like this, I like this. Okay, first of all, Carly, that's a great, great question. And I think Ruben has just given you an agenda to work through.
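As an editorial aside, the small-model-versus-large-model comparison Ruben suggests can be recreated in miniature. The toy below trains a bigram and a trigram "language model" on a made-up corpus, standing in (very loosely) for a 7 billion versus a 65 billion parameter model; this is not how the real tools work internally, but it shows how a model with more context captures patterns a smaller one misses, with no thinking involved anywhere, just counting.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for training data.
corpus = "the cat drinks milk . the dog drinks water . the cat drinks milk .".split()

def train(tokens, n):
    """Count which word follows each (n-1)-word context."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context, nxt = tuple(tokens[i:i + n - 1]), tokens[i + n - 1]
        model[context][nxt] += 1
    return model

def predict(model, context):
    """Most frequent continuation of the context: pure pattern lookup."""
    return model[tuple(context)].most_common(1)[0][0]

bigram = train(corpus, 2)   # the "small" model: one word of context
trigram = train(corpus, 3)  # the "large" model: two words of context

# The small model sees only "drinks" and picks the overall most common
# continuation; the larger model can tell cats and dogs apart.
print(predict(bigram, ["drinks"]))          # milk
print(predict(trigram, ["dog", "drinks"]))  # water
```

Everything the models "know" is a frequency table built from the corpus: a pattern detector and pattern constructor, exactly as described, just at a vastly smaller scale.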
So I want to thank you for that question, and Ruben, thank you for that fantastic answer. And I just want to make sure: Engaging Ideas, that's John Bean and Dan Melzer?

Yeah, John Bean and Dan Melzer, and it's the third edition. It is worthwhile getting the third edition, because there are a couple of new things that have come in that are actually ideally suited. The old editions are still great, don't get me wrong. But the new edition in particular has some of the latest research, some thinking about tools, et cetera, that is particularly well suited to being transmogrified in light of large language models, for instance.

That's very good. I just put a link to the book there in the chat if you want to pursue that. We have a quick video question from Guy Wilson, coming to us from Missouri. Hello, Guy.

Hi. So basically I'm on the side where I support the instructional technologies. And obviously, I think, if people saw the Blackboard announcements last week, Canvas was making announcements this morning: they're adding an AI-powered layout tool, they're gonna have an AI-based marketplace for their partners, I guess, for AI stuff, and they're bringing Khan Academy's Khanmigo writing coach to Canvas. So we're seeing all of those kinds of things that are gonna be pulled in, and we're already starting to integrate a couple of tools that involve AI. But are you aware of any open source AI tools or projects that are being developed for LMSs?

So that's a great question. The answer is yes, I know that there are people working on this. I don't know anybody that has one ready to roll out. Now, I would obviously look at the Moodle crowd to see who's most likely to be the first out the gate. If I had to make a guess, I'd say Moodle and Canvas would be my two first guesses: Moodle on the Libre, open source side, Canvas on the commercial side, just because of the ease of integration of some bits and pieces.
So I am aware that there are projects working on this, but I could not give you a timeline on when the tools will be ready for a rollout. I will say that I suspect it's going to be sooner rather than later. There's been a lot of effort in the Libre community to say, hey, how do we make sure we have hooks into everything? A simple example: we're seeing right now a whole series of tools that use small models, so you can run them on pretty much any machine, but small models targeted at specialized fields of inquiry. So we have, for instance, small tools to do things like design experiments for biochemistry. You're a biochemist, you're designing research, right? This is not a toy; this is actually for designing true research projects, where there's a whole series of things you want to keep in mind, et cetera. There are now some tools that work in specialized domains, and the reason these tools are coming out as quickly as they are is that there's been an effort by the community to make sure that all the pieces talk to each other. So I'd expect to see it sooner rather than later, but as I say, I honestly couldn't give you an exact timeline on that.

Thank you.

Oh, great question. Thank you, Guy. Yeah, very good question. If you're new to the forum, that's an example of a video question. So if you'd like to join us, you don't have to have a beard; just press the raised-hand button at the bottom of the screen. And by the way, a shout-out to Vic on Mastodon, who has been live-tooting our session so far. Bravo for doing that. I think that may be the first time we've had a live discussion on Mastodon there, so I'm really glad to see it. And speaking of video questions, we have one from our great friend and AI stalwart Brent Anders, coming to us from what is probably close on midnight out in Armenia right now. Hello, Brent.

Hello, hello. So yeah, this is a great discussion. I love the idea of the rebirth of the humanities through AI.
That's definitely an awesome way to look at it. Okay, so here's my question, and this ties in a little bit with the previous question. Moodle is the only learning management system that I've found that has a plug-in, recently released, that lets you embed ChatGPT within your Moodle. But the coolest part isn't that. The coolest part is this holy grail aspect that I'm looking for, and hopefully you can give some illumination on this: the power and ability to have another layer within that process. And this other layer is that the plug-in allows you to put in a file with content. So say I'm teaching a class in professional communication. I put in specific information about what I mean when I say verbal communication, what I mean when I say nonverbal communication, all these specifics of the topic that I'm teaching. So that's a layer that ChatGPT will look at before it answers. It's not just going into a database of information and then giving an answer. No, it's first looking at what I, as the teacher, say is the correct stuff, and now it's gonna use that to give information to the student. I see that as the holy grail of AI, because so many instructors want to unleash AI with their students. They want their students to use AI on their own, even to help with learning the content, but they want it to be more specific to exactly what they're teaching. They want to avoid the confusion of an AI giving a different response than what they're pushing. Have you heard of anything like that, maybe using a hybrid of something like ChatGPT and an open source tool and putting them together? Now, I realize that in this plug-in implementation it is using an API, because you're having to call it and it's costing some money. But what are your thoughts on that?

Yeah, I agree with you that that's a hugely powerful use of the tool set. And to answer your question: yes, there's already work on that.
So the easiest way to get into that, again, is in the links that Bryan posted with the tools: the web interface tool for the large open source, Libre language models has that as a feature. You can feed it, in a proper format, exactly what you're describing, a list of texts, and then you can say, well, use this in this way. There are many ways of getting at it. It can go from feeding it in a very fire-hose way, for lack of a better word, where you just take all the texts and you say, boom, you know, here's a huge set of texts, go to it, using these texts as the parameters for your reply, to a much more nuanced way in which you're saying, well, use this in this way to reply to this. So you can construct it as fine-grained or as fire-hose as you like. If you need something more sophisticated than what's in there, there's a whole series of Libre tools; again, Bryan, I'll be happy to send you some links, because they require a little bit more effort, but not that much more. And again, I'm not aware of any full interfaces yet into Moodle and so on, but we're not far from that. There's no reason it can't be done; it's just a matter of all the bits and pieces hooking together. So you could do this for, say, a course you're teaching in, you know, management, where you want the course informed by the cases that you already have in text files or PDF files or whatever format it is, so that that's the wellspring your students are drawing from. And then you might have students bring in their own suggestions and add them to the mix and see what happens with that. That's already doable. It's just not fully integrated yet with things like Moodle and so on. And again, this is also one place where the experimentation is important, because you'll find that if you use very small amounts of text, you don't get enough richness to make this possible.
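As an editorial aside, the layering Brent describes is commonly implemented as retrieval-augmented generation: pick the most relevant course documents and prepend them to the prompt before the model answers. The sketch below is deliberately crude, with made-up course texts and a toy word-overlap relevance score; real tools use embeddings and a vector store, and all the names here are invented for the example.

```python
COURSE_DOCS = [
    "Verbal communication covers the words, tone, and pacing a speaker chooses.",
    "Nonverbal communication covers gesture, posture, eye contact, and facial expression.",
    "Grading policy: late submissions lose ten percent per day.",
]

def relevance(doc: str, query: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    query_words = set(query.lower().split())
    return len(query_words & set(doc.lower().split()))

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Prepend the k most relevant course documents to the student's question."""
    ranked = sorted(docs, key=lambda d: relevance(d, query), reverse=True)
    context = "\n".join(ranked[:k])
    return (
        "Answer using only the course material below.\n\n"
        f"{context}\n\nStudent question: {query}"
    )

prompt = build_prompt("what is nonverbal communication", COURSE_DOCS)
```

The resulting prompt, fed to any LLM (commercial or Libre), constrains the answer to what the instructor supplied, which is the "correct stuff" layer Brent is after.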
You need to give it more than a certain amount. Now, what that critical amount is, we're still figuring out; it turns out to be less than people feared. At the beginning, people were saying, oh my God, we're gonna need thousands upon thousands of texts for everything, but you can do quite well with, you know, a few dozen. It depends on the field, and it depends on what you're trying to do as well. So that is one caveat: what you want to do may require more or fewer texts than what somebody else wants to do. Again, early days; I think this is all going to become fairly systematized within the next year or so.

Well, that sounds terrific. Brent, thank you for sharing the holy grail idea with us. And also, just another shout-out reminder: Brent published a book on AI literacy that I strongly recommend everyone grab a copy of, and please, Brent, throw a link in the chat so people can grab it. And again, Ruben, thank you for the excellent, excellent answer in great detail. This is one reason why I wanted to have you here. We have a bunch of other questions coming in, and I want to make sure that everyone gets a chance to raise them. And this is a really, really good practical question from Elizabeth, I believe it's Pichella; if I'm mispronouncing that, I apologize. And she asks: many college networks have security features that restrict access to AI tools, e.g., I can't access Bing Chat on campus. What are the risks to the operating system or network when using AI tools, and how do we mitigate them?

Again, it's a great question. Obviously there's a question of how your campus policies are set up, right? And one of the things about Libre AI that I point out is, if you're worried about privacy, nothing is better for privacy than to have it on a machine that sits on your desk and is only accessible from your desk.
So, if you absolutely have to limit the network access and so on, you can have this on the machine on your desk. You only need to connect it to the network when you download the models and you download the software, or when you update them; otherwise it can be offline, and you can physically unplug any drives that, say, might have information. For instance, I do some research on qualitative analysis of interviews, and you want to keep those interviews private; you want to make sure they cannot be accessed. Put them on a hard drive, unplug it. If a hard drive isn't plugged in, then short of magical fairies, I don't quite know how it's going to get anywhere. So that's the most secure.

But as with all of these, I recommend that people consider either running locally, where the cost of the machines has come down dramatically; it's not significantly different from a machine you might be using yourself. Obviously, different resources at different locations, but to give you an idea: a machine that would have been considered a mid-range gaming machine a couple of years prior to the pandemic will serve you just fine with all of the current Libre tools, to a useful level. That's what I use myself, actually. It's not a particularly fancy or powerful machine, but it was inexpensive, relatively speaking, to build. Obviously, if you want to get fancy, you can do that. So again, if you have a local machine, then it's controlled by your local policies, and the security is as good as the security of your network.

Or you can go out to the cloud, where you can rent machines online. And there, I have to say, you have to go on a case-by-case basis. Some companies have very strict policies for security. For instance, if they work with medical records, assuming it's a responsible company, and this is where I would go to medical schools or hospitals that use their services and check with them to see what their reports are,
they're going to have stringent security protocols in place. I've worked with some of these in the past on different projects, and hypothetically you could always break in, but it would take a major effort, frankly, to do so. Others are more open, and you have to decide. So, for instance, Google Colab has both a free tier and a paid tier, and you can use it to run some small LLM models. But I'll be honest with you: the security ain't great and the privacy ain't great, so I wouldn't use it for anything where security or privacy is a concern. The long and short of it is, you can go from making it as local as you want, at not too high a cost, to cloud-secured instances, to further out if you want — but then, of course, you are in a less secure environment. Well, thank you for that quick and very, very detailed breakdown, and thank you, Elizabeth, for the really, really good question. It's a very, very important aspect of the topic to think through. We have another question from another Brent — Presley — who asks a very technical question, which you may already have pointed us to: are there any resources for helping someone fine-tune one of the open source models? Yes, in fact. One of the resources that Brian shared — the last one, in fact — allows you to go far beyond fine-tuning; it allows you to build a model from scratch. So if you want to go all the way, you can do so. And if you look at the resources linked from it, you have, for less ambitious people, guides on how you might fine-tune and so on. But here's what else I'm going to recommend, using the wisdom of crowds, so to speak. I spend a lot of time on Reddit and Discord — there are a few others, but those are the two major sites — as well as in the discussions on GitHub, in the repositories for the source code themselves. And the community is, overall, what I would call a welcoming one.
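On the fine-tuning question itself: many open-source fine-tuning toolkits consume instruction/response pairs in JSON Lines form. This is a minimal sketch of preparing such a file — the `instruction`/`response` field names are a common convention rather than a requirement of any particular tool, and, per the earlier point about dataset size, a few dozen domain-specific records can already be a useful starting set:

```python
import json
from pathlib import Path

# A couple of illustrative instruction/response pairs; a real starter
# set would hold a few dozen or more, drawn from your own domain.
examples = [
    {"instruction": "Define 'Libre AI' in one sentence.",
     "response": "AI whose models and code are open to inspect, modify, and run locally."},
    {"instruction": "Name one privacy benefit of running a model locally.",
     "response": "Your data never leaves the machine on your desk."},
]

path = Path("finetune_train.jsonl")
with path.open("w", encoding="utf-8") as f:
    for ex in examples:
        # One JSON object per line is the JSONL convention.
        f.write(json.dumps(ex) + "\n")

lines = path.read_text(encoding="utf-8").splitlines()
print(f"wrote {len(lines)} training records")
```

A file like this can then be pointed at whichever fine-tuning pipeline you choose; checking that toolkit's expected field names first is part of the case-by-case homework discussed throughout.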
As with all of these communities, there may be the person who loses their temper every now and then, or isn't very nice, but by and large the community will support questions. And just to give you an example: right now, as I said earlier, Meta released Llama 2 as its second set of quasi- or somewhat-open-source models. And there are some questions as to some strange things that are happening with Llama 2. And I'm finding that, frankly, I'm getting faster answers from other people experimenting with this than from Meta. Not that Meta is trying to be bad or obstructionist — they're just getting bombarded with questions. And in the meantime, the people on GitHub are saying, hey, I tried this experiment and this worked or this didn't work, or, I know it's this bit of code that seems to be causing a problem here. And the bottom line is, I'm seeing very rapid evolution: this particular problem I'm seeing, with things not having a long enough context to answer a question at times, is getting solved within the next day or so. So that's the other recommendation I would make. You have this site and its attached resources, but I also strongly recommend, as I say, Reddit, Discord, and GitHub as your go-tos. Other communities, of course, can be as well, but those are the three I've found best. And last but not least, take a look at some of the courses that are out there — on Coursera for free, and some others on YouTube, et cetera. Some others are not quite free, but relatively inexpensive, and can also provide support in this regard. Well, thank you for the great answer. And Brent, what a really good question. We're slicing this problem from multiple directions and coming up with a bunch of different aspects, which is really, really good. And now I'm delighted to raise a question from my colleague at Georgetown, William Choi, who is also just a brilliant, brilliant person. And William asks: OpenAI switched from open to closed source releases.
They claim they were trying to prevent widespread malicious use. Do you think that open releases can be balanced with public safety? The short answer is yes. In fact, this is a question that has come up many, many times before in the context of operating systems. When people looked at a system like Linux, for instance — which has become, frankly, the main underpinning of many of the systems that run the web today — the fact that it's open source raised the question: couldn't somebody inject malicious code and so on? The answer is always the same: yes, but many, many eyes help keep that from happening. There are attempts, make no mistake, at injecting malicious code, but many perspectives help. And something similar happens here. Because the other thing is, OpenAI — again, I have a huge amount of respect for the research team, make no mistake; I'm very impressed with what they've achieved — still is a relatively small company. And there's a difference between even a dozen or a couple of dozen people looking at something to ask, how can we make sure this doesn't get into trouble, versus having thousands upon thousands. And that's what you're seeing right now with the open source delivery models. When I'm posing a question — hey, I'm seeing this problem with Llama 2 — I'm not seeing something the size of the team at OpenAI; I'm seeing something that's larger as a community than all of the AI software companies taken together, including some people from those software companies who spend time on these community sites to help. So we all want this to be usable, we want it to be safe, we want it to be very much accessible to people, and I don't think that's in any way incompatible, provided that we have this community dynamic moving forward.
In other words, one of the things that doesn't work — and we've seen this happen — is where you take an open source project and you say, well, from here on out it's only going to be these five privileged people who get to look at it. That's not good; that's where suddenly things start going the wrong way. So yes, I think so, and I think, again, the open delivery aspect is essential to this. I think it's also essential, by the way, to understanding where issues can arise. So let me give you an example of this, because this is an important question. One of the questions is: how do you use AI and deal with things such as built-in prejudice or biases, et cetera? And the trouble is, if it's closed, I have no way of knowing what went in. But if it's open and libre, I can actually, today, take any one of my models and deliberately set up a scenario that gets at the toughest types of bias to eradicate — the type of thing that is so buried historically that people don't even realize there's a historical bias. The type of thing that has to do, for instance, sadly, in the US, with things like the policies of the Woodrow Wilson administration that suddenly stripped Black workers of protections, of jobs, et cetera, right? That had a huge impact going forward. But it's not the same as the type of thing people usually think about — somebody overtly calling somebody names; it's something that has an impact moving forward. And it can have a pernicious effect in the sense that this then biased how businesses, for instance, gained access to money; this biased how banks, for instance, made loans and made money. So if an AI just looks at this from a naive perspective, it says, oh, well, doing this gives you the best yield — look at the historical records.
Yeah, but the historical record was driven by the fact that these events took place during the Woodrow Wilson administration, which thereafter cut these people off so that they didn't have access to this. So within what the AI may recommend, there's going to be a bias that's going to be harder to get at. Now, if I know what went into the Libre AI, I can say, aha, I will now deliberately construct explorations — this is how I construct queries of the AI — that get the AI to say: by the way, this was caused by X, Y, and Z; therefore, you should look at alpha, beta, gamma, whatever, to change this. And again, this is not a trivial question. This is something that's gonna take a lot of work — a lot of work by people working with AIs to decide how you triage this, how you decide what you're looking at. But I keep coming back to this: this is what Libre AI, what open AI, makes possible. Because if I can't see what's going in there, if I have no way of getting at what's going on, it's much more difficult to get at this type of question. Transparency is one of the great benefits of open source. Well, thank you for that excellent answer, Ruben. That's a terrific question. And I do wanna make sure that we get some other questions in here. By the way, some of you have asked questions about AI in general, and I'm saving those for the end, because the focus right now is on open source. So for everyone who wants to ask a specifically open source question, we're gonna get to you first. And here's one right now — oops, yeah, it's coming in from Daniel Shown. The Open Source Initiative is driving a conversation to define what open source AI even is, with the aim of providing guidance to key stakeholders. Any thoughts on that effort? I think it's an important conversation to be having. One of the things I do want, though, is to see it happen in multiple places.
And this goes to one of the aspects I think is going to be important: sometimes, when people say, well, let's look at AI, they take this monolithic approach, and you really need to be having the conversation about what open source AI is and how it's going to be used in different settings, in different contexts, you know? I'll be honest with you: I appreciate what the EU has done in terms of trying to create a grand framework for AI — AI safety, responsibility, et cetera. I'll also be honest with you: I'm not convinced it's the best approach. I think you're going to need a conversation that takes place in multiple places. It's a lot messier, okay? I'm not going to deny that. But I don't think we're going to come down to just one overarching set of principles. Rather, I think we're going to come up with different principles that get different instantiations in different contexts, and different conversations will be necessary to see how to best apply those principles, develop those principles, and so on. So again, I don't think it's a question of saying, well, we need this regulatory framework. We do need regulatory frameworks, plural, but I don't think there's going to be just the one grand regulatory framework for AI. Rather, I think we're going to see ideas about how to regulate instances, uses, and contexts for AI in different scenarios, and that, in turn, is going to help define the conversation about what open source AI, what Libre AI, is in the first place. Oh, that's great. What a really good question, and thank you again, Ruben, for this. I think we have questions that kind of straddle the divide between open and proprietary AI. So, let's see, I have one from Chandrika — let me try to get this one up here. What are your opinions about AI tools related to instructional design, such as ideaassist.co? Are such tools safe to use?
For example, ideaassist is a Chrome extension — is that an open data risk? And that's a new one for me, actually; I haven't seen that one. But you've said it yourself: in other words, you have to go on a case-by-case basis. What are the safeguards built into the tool and into the framework the tool is embedded in? I mean, as I'm sure you know, there are contexts in the EU, for instance, where you cannot use the type of approach used by many Google plugins, et cetera, because they do not satisfy EU regulations. There can be issues as well with where the data from the plug-in is stored, et cetera. So it really is a case-by-case basis, and I would recommend looking at it as such. Again, I do keep coming back to the idea that with open source, with Libre — make no mistake — it's not that I can say, oh, every tool is much more secure; but it's easier for me to at least say, well, I've created an environment within which that tool in particular is secure. And let me give you an example, by way of a practice in some of the Libre AI world, that is not secure. Some of the Libre AI tools say, oh, run this code remotely on a remote server. There, I would say: look at the remote server, look at what it's doing, look at how it's interacting. In some cases, that allows you, yes, to run tools that you could not run locally, because they exceed the computational capacity of your system. But the minute you do that, you have to look at it carefully and decide: are the trade-offs worth it to you or not? As I said before, I do work with interviews, but if it's an interview that I've conducted and I have privacy guarantees for the interviewee and so on, et cetera, that is not going on any public site where I have any doubts whatsoever. That's staying on my system. And, as I said before, if it's not being used, it's unplugged. Very good — the great air gap. Well, thank you, Chandrika, for the question. And I've forwarded the link to that plugin.
We have a couple of more general questions now, which I know, Ruben, you are more than happy to address. And this one is from a good friend, Mark Corbett Wilson. It's a more strategic question: can Ruben give us his thoughts on the weaponization of AI? The MIC — I think that means military-industrial complex — is integrated with both corporations and educational institutions, and autonomous weapons are already on the market. So I think he's referring to literal weaponization. Yeah, no, no, I understand. And the answer to that is: this is one where it is crucial that all sectors of society, but most definitely academia, get involved in the conversation. This is not something where you should say, oh, well, let's leave it to the military or to the arms vendors. No, everybody needs to be involved, because — let's be clear about this — some of the uses of these autonomous weapons are very, very scary indeed. I'm not worried about a Skynet scenario; that's not the type of scenario. But the type of scenario where suddenly it becomes easy to say, well, you're just sending in a drone, and if you no longer have a stake with soldiers on the ground, and if there are a few collateral casualties, oh, well — that worries me. Now, the good news is there are people in the military, and there are people indeed among those who supply weapons to the military, who are very much engaged with the idea that, no, there have to be codes of ethics around this and so on. But I do want to emphasize: this cannot be left just to the military or just to the arms vendors. Once again, this has to involve everybody. So every time I hear, well, this is too complicated for you — sorry, no, that's not a legitimate answer. Those of us in academia, we are teachers. We are learners. We have a duty to become better teachers and learners. We can explain this and involve people in real conversations.
And this would be a wonderful conversation for another day: what types of conversations, how do you scaffold this type of project? There are several deliberative democracy projects that have very much the right type of tool set, which, if augmented with understanding of and access to the technology, can help inform conversations about this. That would be a very long conversation, and I'd say a very worthwhile one, for another day. But the important thing is, it has to happen. It cannot be something that we in academia, or society at large, step back from and just say, hey, leave it to the professionals. It really has to be a conversation about ethical decisions, ethical uses, that involves all of society. And thank you, because that's a crucial question to be asked. What a great exchange. I'm a big fan of Mark and his thinking, and he also notes in chat that this applies to creating propaganda — and that's something I've been experimenting with, with ChatGPT. Thank you for that answer, Ruben. We're almost completely out of time, so I wanted to ask one question looking ahead a bit. What might a college or university look like in, say, just three or four years, if they really embrace open source AI? What are some of the ways that might change their operations — their research, their day-to-day life, the classroom? That's a great question. I already mentioned one, of course, which is the re-engagement with the task of critical thinking, with the humanities, with reading in a deep sense, and reading across multiple dimensions — of course books, but also other things: movies, performances, et cetera. But I think there are other aspects that also come into the picture. One of the things is — Brian, and here I have to thank you for the yeoman's work you've been doing in keeping the issue of climate change front and center — we have hugely challenging questions.
So I would like to see those questions become a key component of all conversations. That doesn't mean you abandon everything else. But I don't think we can go any further saying, oh, well, somebody else will figure out climate change, or, I teach in X department and I don't see — no, sorry, we need to be having these, again, as conversations. And I think the tools can help us do it. The tools won't do it by themselves; okay, it's like anything in SAMR — straight substitution doesn't do anything. We're going to need to construct approaches to learning, approaches to thinking. But to your question: I would hope that a college or university that truly leverages the opportunities of AI is one that becomes, if anything, more deeply engaged with the questions of learning — one that engages all of its communities, including the surrounding community, because that's another aspect. We have huge resources, but so frequently we speak of the town-gown divide, and it's not always a fair statement, et cetera. But nonetheless, we need to be talking about how these colleges and universities interact with the communities around them, how they interact with the politics of the world they inhabit. And again, will these tools do it for you? Not in the least. If misused, could these tools make things worse? Yes, of course. But the key thing is, if used correctly, if engaged with in the ways I'm talking about, I think it could give academia a new relevance. When people speak about declining enrollment in academia, et cetera — well, I think the best way to address that, frankly, is by asking: how do you make academia more relevant, more engaged, more something where somebody says, yes, I want to be there, I want to engage with it; or, even if I'm not a part of that community, I want to be talking with that community, because they are using these tools to make things happen — to take care of issues of scale, for instance, scalability issues that were barriers in the past.
There are many things you can start to address, as I said — getting into some of these deep aspects, helping communities that did not have access to some understanding to scaffold that understanding using some of these tools. So that's the short version. That's a great version, and I'm afraid it's the last one we can offer this hour, because we have run out of time. Ruben, thank you for being just a fantastic, fantastic guest. You have given us so much to think about, so much to work on — both very practical, hands-on tips, as well as guidance on how to think about all of this. What's the best way to keep up with you these days? Are you putting your online efforts into Twitter, or Facebook, or LinkedIn? Right now I'm switching more and more to LinkedIn. I don't know, I really don't like the idea of Xing; it sounds like I'm doing something terrible to the people I'm talking with. But there's also the question of whether Twitter will continue to retain some of the aspects that made it so appealing at one point in time as a public space, an agora where people could talk with each other in interesting ways and so on. And at this point I've found LinkedIn is a good space. I'm also experimenting with some of the other social platforms. But for now, let's say it's LinkedIn, and I'm still keeping a toe in the water on Twitter, in the hope that it might still pull back from the brink of complete dissolution before it exits out. That's the best way, yeah. Well, thank you. Thank you, thank you so much. We have all kinds of questions people have for you; we're gonna bring you back, of course. But in the meantime, thank you so much. Thank you, Brian, it's been a pleasure. Now, if you'd like to keep talking about these issues, we can do this, as I said, on a few different forums: Twitter — or the platform formerly known as Twitter — or Mastodon. Just please use the hashtag FTTE, and you can see my logins and my handles there, as well as on my blog, bryanalexander.org.
If you'd like to look back into our previous sessions — in fact, here, let me just jump ahead a little bit — you can take a look at tinyurl.com slash FTF archive and look back at our previous sessions; we've now got a dozen on AI. If you want to look ahead, we have more sessions on AI coming, as well as other topics; just go to forum.thefutureofeducation.us. If you'd like to explore this further on my Substack, just head to aiandacademia.substack.com — I'd love to see what you think. And as we do all of this, as we end every session, let me wish everybody well. I hope those of you in the Northern Hemisphere are not too cooked by the heat dome that seems to be settling over different parts of our planet. I hope those of you in the Southern Hemisphere are enjoying your cooler seasons. Above all, I hope everybody is safe and sound as you prepare for fall classes and everything we should anticipate from there. Thank you all so much for participating in a great session. We'll talk to you next time. See you online. Bye-bye.