Good afternoon, everyone. Have you had enough coffee and food and everything, water, a stretch? As he mentioned, my name is James Wiley. We are going to talk about AI, kind of what now, what next. But before I start, the big burning question I'm sure you all have in your mind is, yes, I do always overdress. I'm convinced that I was born in a tuxedo, which is very important for me, because it was a great day. So I was born in a tuxedo. So I always overdress. So please excuse that. It's not like I'm excessively formal. What we're going to do today is dream a little together. This discussion is going to be a little bit different. It's going to be less about the specifics of AI and more about what AI can do for us in the long run, in terms of education and the workforce, and also what paradigmatic shifts are coming up. What do we need to pay attention to? And I'm going to call out a term called a foundation model that some of you may have heard. That's going to be at the end. I don't have very many slides. I've got 18. So I hope there'll be some discussion after it. But remember, we're going to talk more broadly about AI, the student journey, et cetera. So we basically have four big sections. I'll start with an introduction about who we are; there'll just be one slide. I'm not looking to sell anything to anyone, just want to give you some context about who we are. Then I'm going to talk a little bit about the timeline of AI, just to put it in context, about the three big portions of AI: artificial intelligence, machine learning, and deep learning, a subset of which is what we now know as generative AI, et cetera. Then I'll talk about the current state of AI. The data I'll show here is primarily US data, but it still gives us some insight into what people are doing when it comes to implementing solutions with AI in higher education. And then here comes the dream part.
This is when we start talking about the future, what it looks like going forward, et cetera. So who are we? Basically, we're a research firm. We collect data, and here are the different data points that we collect and track. We try to draw inferences and connections and patterns between these data points to further understand the market, not just for higher ed but also for K-12, and not just for North America, but internationally. So that's essentially what we do. We look at all of this data, all of these points you see here, and we say, what does it all mean? Are there any interesting things we can learn? Like, what are the key drivers for implementation? What are the associations? One example is, what are the tech stacks that people are deploying? Why are they choosing Moodle and not one of its competitors? Those are the kinds of questions we look to answer with our data. So that's it, sales pitch done. We're all set. So AI right now, discussion of AI, is probably the worst drinking game in the world, right? If you took a shot every time someone said AI, you'd be in the hospital in about half an hour. So let me put some context around what I mean when I'm talking about this. There are three big things. It started in the '50s with Alan Turing at Cambridge, and it began as just artificial intelligence mimicking humans, so that a human couldn't tell the difference between a human and a machine. That was the original version. Then it deepened into what we now call machine learning, which is a lot of algorithms that help us predict. Then deep learning, which deploys neural networks, large language models, et cetera, to uncover patterns. And a subset of that, primarily, would be the now almost infamous ChatGPT. We still have a long way to go talking about ChatGPT. I usually judge whether a technology term has gone out of style by whether my mom's asking me about it. She's not asking me about this yet.
She's still worried about Alexa and Siri. So we've got some time here. We've probably got another 10 years of talking about ChatGPT, but essentially this is what we're talking about right now. But to be clear, this is the panic people have right now. When you talk about ChatGPT and everything, everyone's like, oh God, the horror. I was talking to an institution not too long ago and all the faculty members were worried about being supplanted. The fear with AI, ChatGPT particularly, wasn't that it was gonna supplement them; it was gonna supplant them. We have other problems around ChatGPT: worries about bias, misinformation, et cetera. We have concerns about it at the national and global level as well. But the overall state right now, I think, is one of anxiety and terror. Some people are getting beyond it. I know some of you are, most of you are, not all of you, but right now when I talk to people in the field, they're still here. They're still worried about this, about what it means, et cetera. But the thing is that AI has been around for a while. Our data shows that people have been implementing AI solutions, particularly around machine learning, for some time. They've been implementing solutions not involving AI at a faster rate, right? That's the dark blue, I mean the bar at the top. But on the AI side they're primarily working with machine learning. There's a lot of stuff going on in machine learning. Some of it, when you talk to vendors, is really not machine learning. It's just fancy algorithms. It's not learning in any way whatsoever. It's just great math. But in some cases, there is machine learning. And here are the big key use cases, five that are happening in the US right now. First, starting from the top, is admissions and enrollment. How can I improve my enrollment funnel? How can I target students better? How do I know which students are the best fit, best match? That's one use case.
The next is advising. How do I identify at-risk students and target interventions and other activities to prevent them from dropping out? Then there's retention, which doesn't go through the advisor; it's just someone else saying, here's the data, you gotta talk to the student, et cetera, let's go. Then assessment integrity, which is proctoring, and you see some of our colleagues out there talking about that. That's a big use case right now. And then teaching and learning. How can I improve teaching and learning? That's what we've been discussing here, right? Yesterday, today, and we'll do tomorrow. How can AI be used to improve course content, searches, et cetera? These are the big five use cases right now. There are others that are coming into play, like how to identify alumni for enrollment, things like that. But here are the big five that we're seeing in higher education, particularly in the U.S., right now. So what does the future state look like? This is where we dream a little. So this thing here on the left, which I'll dive into more, is called a foundation model. The term actually comes out of Stanford University in California in the U.S., and the idea is that we have at the core of this model an AI engine that can ingest different sources of data, taking text, images, et cetera, and it can interpret and adapt for downstream tasks. A downstream task here would be question answering, image captioning, et cetera. When we talk about AI, we talk a lot about those downstream tasks, right? That's what we talk about: AI in this particular area, AI in that particular area. But what Stanford and others are arguing is that that kind of misses the forest for the trees. Really, this is a paradigmatic shift that's happening.
For the first time in history, we have a model that can take in almost unlimited types of data, unlimited amounts of it, and analyze it, interpret it, and deliver it for downstream tasks. Now, there are four things used to characterize this model; well, two, and I'll add two. The first is emergence, meaning, and this is kind of scary given the power of the model in the middle, the power of the AI, that it's essentially largely a black box and its uses emerge as we use it. We have data, we're like, oh, wow, that's interesting, didn't know I could do that with it, right? That's kind of scary, because we don't have technology that works that way. Usually we build technology for a purpose, and we might accidentally find it does something else, but this approach says emergence is key. It's going to happen. The second is homogenization. What's going to happen is you're gonna have a lot of vendors building something to support this, and you'll start seeing more and more similar models used to produce these downstream tasks. And that's what we're seeing right now in the industry: if you look at IBM, if you look at Microsoft's 10 billion US dollar investment in OpenAI, AWS, they're building platforms to support this model. That's what they're doing. They're not building for the downstream tasks specifically, they're building for that. The third thing is convergence. What's going to happen is that a lot of technologies built on these will begin to share functionality. If you have a business intelligence solution, an analytics solution, and you have an LMS, and they're both built on this, you're like, just put the analytics functionality into the LMS. Why have two solutions? Just have one, right? So they're gonna converge. And the last, which is important for everyone here, is that it democratizes things. This is a technology that allows you to build on it. You can build on this stuff right now. Low code, no code, great code.
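The "one model, many downstream tasks" idea can be sketched in a few lines of code. Everything here is invented for illustration: `FoundationModel`, its `embed` method, and the two toy adapters are hypothetical stand-ins for whatever API a real platform exposes, not any vendor's actual interface.

```python
# Illustrative sketch: one shared model, multiple downstream tasks.
# All class and function names are hypothetical, for illustration only.

class FoundationModel:
    """Stand-in for a pretrained model that many tasks reuse."""

    def embed(self, text: str) -> list[float]:
        # Real models return learned vectors; here we fake one deterministically.
        return [ord(c) % 7 for c in text[:8]]


def question_answering(model: FoundationModel, question: str, context: str) -> str:
    """Downstream task 1: return the context sentence 'closest' to the question."""
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    q = model.embed(question)

    def score(s: str) -> float:
        # Crude similarity on the shared embedding.
        return -sum(abs(a - b) for a, b in zip(q, model.embed(s)))

    return max(sentences, key=score)


def summarize(model: FoundationModel, text: str, n: int = 1) -> str:
    """Downstream task 2: same core model, different adapter logic."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:n])


model = FoundationModel()  # trained once, reused by every task
doc = "Foundation models ingest many data types. They adapt to downstream tasks."
print(summarize(model, doc))
```

The point of the sketch is structural: both tasks sit downstream of the same core model, which is the shift the Stanford report is describing.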
You can build apps on this right now. It opens up technology, it opens up AI, for anyone to build. Right now I'm sure there's some kid, probably one of my son's friends, thinking, got it, let me just start building stuff on this. You can build it. So: emergence, you learn as it goes along, you go, aha, didn't know that. Homogenization, models become similar over time. Convergence, technology begins to share functionality and some technologies, some products, may go away. And last, democratization: you can do it, it opens it all up. So this is an example of what Stanford wrote about in education specifically. It has on the left the different teaching materials, interactions, things we're learning about how students interact with things, subject matter, content, et cetera. It trains on that, and it produces, they argue, five big things. Understanding the student, which we're learning a lot about with learning analytics. Understanding educators, like what are their strengths and weaknesses, through course evaluations, et cetera. Understanding the progression of learning, how students are doing in the classroom. Understanding the teaching itself. I was an adjunct faculty member for three years. I have literally no idea how well I did. I could have been horrible or not. My evaluations were great, but that's probably because I was the same age as the students and they felt bad for me. I really don't think I was that great of a teacher, but no one would know. This is the kind of data you would put in, and you would begin to say, okay, let's align course evaluation data with student performance, and what does that tell us? And then the last is understanding the subject matter a little bit more.
Allowing us to take this content and draw parallels and connections between different types of content, to make that visible to the student as she progresses on the student journey. So these are the five areas that they argue are the big tasks and goals, and I'm gonna add some. The first is personalization. I was talking to a vendor once about personalization. It's not Moodle, by the way, it's one of the competitors. I said, does your LMS offer personalization? He said, yes, absolutely: a student logs on, her name shows up, it remembers her preferences, et cetera. I'm like, that's not personalization. That's nice to have, but it's not personalization. That's glorified mail merge. Personalization is characterized by one thing, called bidirectionality. On the left is the person. That person is searching, navigating, accessing information. On the right is the solution. That solution is understanding, recommending, adjusting, identifying, suggesting. In the middle is the interaction point. If you don't have both of these, you don't have personalization. You simply don't. AI can help us on the right. It can power the right-hand side of this. It can actually help us understand more: what searches, what recommendations, what do you really want? What's the pathway? It could provide more adaptivity over time. The next one is the journey, and this pertains to the workforce as well. We think that higher education is a single trajectory: you enroll, you go through, drink too much, get your major, graduate, done. But really it's more like a highway with pitfalls and traffic jams and roadblocks along the way. You need to adjust along the way, and we don't really have structures in place to do this. If you think of transfer students, for example, it's challenging.
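The bidirectionality point above, the learner acting on the left and the solution adjusting on the right, can be made concrete with a tiny loop. The catalog, topics, and `Recommender` class here are all made up for illustration; the only claim is the shape of the loop: observe, then adjust.

```python
# A minimal sketch of bidirectional personalization: the learner's actions
# (left side) change what the system recommends back (right side).
# All names and data are invented for illustration.

from collections import Counter

class Recommender:
    def __init__(self, catalog: dict[str, str]):
        self.catalog = catalog        # item -> topic
        self.interests = Counter()    # topic weights learned from behavior

    def observe(self, item: str) -> None:
        """Left side: the learner searches/opens something; record its topic."""
        self.interests[self.catalog[item]] += 1

    def recommend(self, n: int = 2) -> list[str]:
        """Right side: rank items by how well they match observed interests."""
        return sorted(self.catalog,
                      key=lambda i: -self.interests[self.catalog[i]])[:n]

rec = Recommender({"intro-stats": "math", "calculus": "math", "poetry": "arts"})
rec.observe("intro-stats")   # learner action...
print(rec.recommend())       # ...changes what comes back
```

Mail-merge personalization only has the left side (the system knows your name); here the right side actually moves in response to behavior, which is the distinction the vendor missed.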
Adult students coming back into the education market, they need to figure this out. They need to know: how do I understand my decision? How do I design it? What information do I need to add or augment? What's the best outcome, the what-if analysis? What's the best outcome in terms of career, courses, money, et cetera? What's the pathway to get me to my end goal of being, let's say, a doctor from where I start? And then, along the way, auditing my decisions to see if I'm making the right steps. Surely an AI-powered foundation model can help with this, right? Because what it's gonna do is weigh all the pros and cons for you, gather all the content, make that visible to you, and help you make the right decision going forward. So I see this as another big area where AI, and this foundation model in particular, can actually help. Engagement is the next big thing people talk about with digital content right now. We assume that I build the content, they will come, and they will be engaged, ta-da, that's it. And then we track engagement, usually in terms of usage stats: did they log on to the LMS, did they not, et cetera. But in between that, would you not want to know whether the student is actually clicking through, actually engaged, scrolling, et cetera? How do we improve engagement? Because studies have shown, and this is one of them here on the screen, that engagement leads to belonging. As we know from the '50s through Tinto, belonging leads to retention. There's a link there. If a student's more engaged with the university as a whole, and particularly with content, that student is more likely to persist and go through, and also to achieve and have better success. So understanding engagement is the third.
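Going beyond "did they log on" can be sketched as a weighted score over finer-grained events. The event names and weights below are invented for illustration; any real LMS would have its own event stream and its own weighting.

```python
# Toy sketch: score engagement from interaction events rather than logins alone.
# Event names and weights are hypothetical, for illustration only.

WEIGHTS = {"login": 1, "page_view": 2, "scroll": 3, "quiz_attempt": 5}

def engagement_score(events: list[str]) -> int:
    """Sum weighted interactions; unknown event types count for nothing."""
    return sum(WEIGHTS.get(e, 0) for e in events)

logins_only = ["login", "login"]
active = ["login", "page_view", "scroll", "quiz_attempt"]
print(engagement_score(logins_only))  # 2
print(engagement_score(active))       # 11
```

Two students with identical login counts can score very differently once clicking, scrolling, and attempts are counted, which is exactly the gap between usage stats and engagement.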
So again, I said I didn't have many slides; I just wanted to open it up for discussion. But just to sum up: yes, there's a lot of panic and discussion around AI right now, but it's been with us for a while. It's been with us for quite a while. The primary use cases are admissions, teaching and learning, advising, retention, and proctoring, cheating, et cetera, assessment integrity. I cannot emphasize the foundation model enough. We can talk all day, and we do, about the unique types of AI tools that are out there, and the features and functions. But the foundation model, which is coming, is key, because it's gonna change the way AI works in our world. It's gonna basically say, give me anything, and I'm gonna analyze it, interpret it, and you pick the downstream task, knock yourself out. You pick an app, knock yourself out. Microsoft is probably closest to this right now, with its architecture built on Azure. That's not a plug. I wouldn't mind if they paid me for it, but that would hurt my integrity. My wallet hurts more. But the thing is, Microsoft is closest, Google is getting there, IBM is getting there, AWS is getting there; they're building something to support that very type of foundation model. And again, it's gonna homogenize the models. It's gonna teach us things as we go along that we didn't know it could do. It's gonna allow us to converge technologies. And more importantly, because I know a lot of you, I was at a Moodle community event not too long ago, a lot of you like to configure and extend and work on things, this is gonna help: it democratizes all of it. Basically, it's gonna be open; off you go. So with that, I'll just stop and open up for questions. No questions? Was I that convincing? That can't be right. I mean, that cannot be the case. Or I was that confusing and you're still trying to work through it.
You mentioned four big companies that were working on the foundation model. Is it more important to select the right company to work with, when you mention Microsoft, Google and all that? Is it gonna be a game changer which company you're associated with?

It's an interesting question. So they're building the infrastructure to support a foundation model. That's what they're doing. The great question will be, what's their differentiator? And right now it's unclear. Microsoft is differentiated only because of its close relationship with OpenAI. That's pretty much it, right now. Others are still playing a little catch-up. AWS is there as well. I think the big differentiator will be how easy it will be for you to build stuff on it, to build apps. And also whether it's free or not; that's gonna be another big key. The big players are aiming right now for vendors to build on this, but you can do it right now too. So the differentiation will probably be, first, who's ahead in the game right now, who's got the more powerful infrastructure to support a foundation model? The second will be, well, what can I do with it? And the third, how much does it cost? If anyone has tried to use Amazon AWS, the pricing model requires an advanced degree in calculus, because you're like, I have to figure out how much I'm using, which service I want, how many cores per server. It's kind of insane, right? Just give me the price. He's nodding. You've felt it before. Just give me the price. And I think they're gonna come up with pretty much a flat fee to allow you to use it. So it's gonna be something along the lines of Google Colab, which is basically an IDE that allows you to code in it.

So in your talk, you're mentioning a lot of positive predictions for the future, right?
But a lot of this initial panic that everyone is experiencing is because most of what everyone is seeing is, this is gonna end society as we know it. So you're making a lot of positive predictions. Do you have negative ones? And what are they?

I do. I put them in a different version of the slides, and I took them out because I didn't want to freak everyone out. I think there are basically three. One is the fact that we have to realize that generative AI, GPT in particular, stopped ingesting data in 2021, I think; that was the last year of the data it ingested, and it doesn't really know anything after that. We're gonna run out of data for it to ingest. So one big fear is whether the technologists will begin to just feed it AI-generated data, so it would just be learning from something it already produced. That can have negative consequences, because all the biases and problems inherent in what it originally produced just get repeated later on. So that's one. Two, I do think the misinformation and bias is a big issue. But here's the biggest one: I was watching an interview with Sam Altman from OpenAI, and he's like, a lot of us don't even know what's in this thing. So if you wanted to have your fears calmed, you're like, hey, tell me, what is in it? What are the inputs? What are the weights? How are you doing this? They can't. If you look at some of the machine learning models that some vendors out there use to predict retention, they don't know what's in them either. I've been doing this for 20-some odd years. I've asked them, tell me, what's your model? And they can't tell you. They can't even give you the broad outlines of it. So I'm like, how do you know that you're not producing false positives or false negatives? How do you know that?
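The check being asked of vendors here is not exotic; at its simplest, it is comparing false-positive rates of an at-risk prediction across demographic groups. The records below are invented for illustration, and a real audit would use proper cohorts and statistical tests, but the shape of the question is this:

```python
# A minimal sketch of a bias check on an at-risk model: compare the
# false-positive rate per demographic group. Data is invented for illustration.

def false_positive_rate(records: list[dict], group: str) -> float:
    """FPR = students flagged at-risk who actually persisted / all who persisted,
    computed within one group."""
    persisted = [r for r in records if r["group"] == group and not r["dropped_out"]]
    flagged = [r for r in persisted if r["predicted_at_risk"]]
    return len(flagged) / len(persisted) if persisted else 0.0

records = [
    {"group": "A", "predicted_at_risk": True,  "dropped_out": False},
    {"group": "A", "predicted_at_risk": False, "dropped_out": False},
    {"group": "B", "predicted_at_risk": False, "dropped_out": False},
    {"group": "B", "predicted_at_risk": False, "dropped_out": True},
]

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
# A large gap between groups is exactly the red flag a model's owner
# should be able to explain.
```

If a vendor cannot produce even this much about their own model, that is the transparency problem being described.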
You don't know if your AI machine learning engine has been flagging African-American males as at risk of dropping out for 10 years in a row. Is that fine? I mean, it might even be accurate, but doesn't it raise a flag? So I think running out of data to ingest is gonna be a big thing; we're not quite there yet, but we might be soon. The second is the bias and misinformation, which is big. And the third will be when an institution, many of you, says to a vendor, hey, you've got a bright new shiny AI tool, what does it do? And when they tell you, you've all seen the slides, right? The slides are awesome, there's a dashboard where you can read all this stuff and see the predictions. And you're like, so how does it do that? They just shrug their shoulders. Most famously, I remember talking to IBM a few years ago about Watson, and we've all seen the wonderful commercials about Watson. And I said, tell me what Watson does. They're like, it's artificial intelligence. I'm like, got it, what does it do? They're like, it's artificial intelligence. You're like, where are we going now? The idea of opening it up a little bit, to at least give all of you some insight into what's going on, I don't think they can do yet. They couldn't do it with machine learning, and it's gonna be much, much harder with ChatGPT and with the foundation model. So those are the three fears I'd have. Thank you. Hope I didn't freak you out, though. But thank you for calling me on it. You were like, hey, stop cheerleading. Well, thanks for dreaming with me for a little bit here. I appreciate it. I know it was a little bit different from the person who did a great job before, and some of the other talks here as well, in terms of more precise action steps about Moodle, et cetera. This is a little bit more philosophical; I used to be a philosophy major, so it's a little bit more, kind of, insane. But I appreciate it, and thank you very much for your time today.

Thank you, James.