How do we live well in a world of AI-generated content and conversations? Welcome back to this episode of Anabaptist Perspectives podcast. I'm here with Kyle Stoltzfus. We've interviewed you a number of times on the podcast. You're a teacher up at Faith Builders, and we also have Marlon Summers here, who is part of the team here at Anabaptist Perspectives. And today is maybe a bit of a different type of episode. We're normally talking about church history or theology or things like that. This time, we're doing a bit of a roundtable on the topic of artificial intelligence. And to just jump right in as we get into this here, what is AI, artificial intelligence, and how did we get here? Why does it matter? So whichever of you would like to start there? AI can drive your car. I mean, AI is a capacity emerging in information technologies that allows it to do actions that we think of as distinctly human actions, things like driving cars, or seeming to comprehend and work with human language, things like that. But there are forms of artificial intelligence all around us. It's a whole category of information technology. What do you think, Marlon? Yeah, starting with definitions. I mean, the first thing to do is to question the term intelligence, which has become a fad. Machine learning is probably more accurate. But yeah, a class of computer programs and so on that are able to go well beyond any kind of human rules and strategies. So it's not based on human programmers putting chess strategies into the computer. It's based on the model being able to try things and see if they work and self-adjust until it homes in on things that work a lot better. It has become a lot more powerful recently. But it's been here for a long time. Okay, so is that why it's suddenly relevant and we're all hearing about this? Because that's one of the questions I have.
I mean, this is a field of computer science that has been talked about, worked on for decades, all the way back to, I don't know, it depends on who you ask. But you know, 1950s, 1960s, whatever. Why is it suddenly like everybody's talking about this? You're hearing statements like, it's going to bring the end of the world or it's going to save everybody. There are a lot of extreme statements being made. Why now? What changed? And there have been a lot of AI moments in the past where it's like, we've arrived. We've got this. But there's something different about some of the artificial intelligences or machine learning algorithms that are available to us now. And like Marlon was saying, it seems like it's their capacity to experiment and to promote their own learning, where they start with certain ways of trying something, and then they execute, and then they learn, and then they adapt and try again. They can iterate and kind of grow in their capacity both to create, it feels like, but also to vary, to kind of adjust their approach to things. And in that way it feels intelligent, at least to us. I would push back. I mean, there is though the very real sense in which a lot of more advanced AI models still do depend on human training and training models. That's not transparent, especially to Western users. We don't see it. But there are training farms, and there are people who work in a whole sector of the industry. We don't know much about it because it's very carefully guarded as an industry secret, and it's kind of low-paying jobs over in developing countries that do the training for us. So there is still human involvement on the training side, but it feels automatic. Right. But again, the human training is not just, it's not programming the computers. That's right. I mean, you keep developing models, but the human training is labeling images, and then the model sees whether it fits.
The model has to adjust itself so that it gives the same answers on these images as people did, trying to copy those answers. Is this all about just copying human behavior, or what are computers even doing then? Exactly. Like computers, historically, you would just give it this input and it would produce this output. You would type something in and it would print something out; it started as text-based, maybe, or very, very simple. And now it's like, I can go to ChatGPT and it'll write like a whole book for me or something instantly, which is kind of insane. That kind of throws people, and it feels like we're at this real big tipping point, where people are putting some pretty extreme labels on it as being way different than a standard computer model. In its capacity, like you're saying too, it feels like it can originate content. Yeah. Not just take inputs and then produce outputs, which it still kind of is doing, but it's originating things. It's putting its own language to it. It's adapting. It's appreciating context, and it can even be ironic or flexible in ways that we haven't really seen computers do before. Even juvenile. It's less like the authoritative professor and more like the student who's trying to wheedle his way into an A. Anyway, we're just not used to computers acting that way. Oh, that's really good. I hadn't thought of it like that, but that makes a lot of sense. Yeah. Well, and that whole turn from just analyzing things to generating things, and especially generating an essay, generating an image, generating a video. I think that, I mean, ChatGPT is what put it on the radar in a big way, and now everybody else is, like, it seems like that's where AI became the rage. Everybody's competing, bragging about, you know, roll it out, put it into Microsoft Bing, put it in our product. Yeah.
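The training loop described here, where humans label examples and the model adjusts its own parameters until its answers match the human answers, can be sketched in a few lines. This is a toy illustration only, a tiny perceptron on made-up data; the function names, data points, and labels are all invented for the example and don't correspond to any real system discussed above.

```python
# Toy sketch of supervised learning: humans provide labels, and the
# model "tries" answers and self-adjusts until it copies them.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights so predictions match human-provided labels."""
    n = len(examples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, label in zip(examples, labels):
            # The model tries an answer...
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # ...and adjusts itself in proportion to its error.
            error = label - pred
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Pretend humans labeled these points: 1 if the features sum above ~1, else 0.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.8, 0.9)]
human_labels = [0, 1, 0, 1]

w, b = train_perceptron(data, human_labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in data]
print(preds)  # [0, 1, 0, 1], matching the human labels
```

No chess strategy or image rule was ever written into the code; the "knowledge" lives entirely in weights that were nudged toward the human answers, which is the point being made in the conversation.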
And in a way, the underlying technology, the basic technology was used everywhere long before ChatGPT made the big splash, but that seems to have forced the cultural moment. So generative AI is maybe what we're talking about mostly. Yeah. And how well it imitates a human, I think, is what scares people, which opens a whole interesting, more of a philosophical slant. So we were doing a bit more of the historical, the technology and so forth. But let's look at what the main issues at stake are here when we're thinking of terms like intelligence. That's the term that's used for this thing. It's intelligent. There are some people that say, oh, it's alive, and, you know, all these weird buzzwords that go around, and it's fully intelligent, like we are. And then other people are like, no, it's not anywhere close. Again, what do we mean when we use terms like intelligence? I want to hear Marlon talk about this. You alluded to it. So I went back and, while I did not get a chance to thoroughly digest it, read a whole chunk of the paper that the statement you mentioned was based on, and they're surveying all kinds of potential, what they call AI risks. And they say right in the beginning, you know, some of these are mutually incompatible scenarios. We're trying to lay out different dangers we see in all sorts of directions. So they did mention the people who think that, you know, AI is intelligence, is the next stage of evolution, is going to replace humanity. But the paper was not actually really pushing in that direction. They talked about those people. They mentioned one form of AI risk as the risk that people would get attached to AI models and believe they have personality, and then force us to protect them, because there's this whole thing already of AI chatbots as friends and romantic partners and all of that. So it was interesting.
These scientists were labeling one of the risks as that we can't deal with AI because these people who are subscribing to AI romantic partners will insist that we grant personhood to these models. Wow. And won't let us control them. Won't let us treat them like machines. Wow. Yeah. That's a new thought for me. Yeah. No, unfortunately, that is a reality in terms of how those things get used. But doesn't that come back to how we think about intelligence? Like, I was listening to a podcast by one of the engineers at Google who helped develop the early models that are now being used for, I think, Google's Bard, I think is what it's called. And he left a while ago because he was worried about some of these risks. And at that point, this was like eight months ago or whatever, he said, you know, it had an IQ level of about Einstein's. I think he was referring to the OpenAI models. And so he's like, well, if it's already as smart as Einstein, that means it's smarter than most of us already, and it's going to take over the world, because it'd be so much smarter than we are. And it seemed for him that that's all there was to it. He was just comparing it to human intelligence as if they were on an equal or even playing field. That seemed to me a bit, there's got to be more to it. Maybe we're getting into what it means to be a human versus to be a computer? Yeah, I'm not even sure how to phrase that, because all of this is almost a bizarre conversation. But I think it's a very important one for us to think about as Christians. How do we look at that? I don't think it's bizarre. I think it's enormously relevant and important. And in some ways, predictable, just because of how adjacent it is to already existing assumptions about what intelligence is, what it means to know something, what it means to have knowledge in the biblical sense, like, what is knowledge? How does that grow out of experience?
And how does that relate to places where artificial intelligence tends to do well, which is relating to massive volumes of information? The kind of intelligence they do very well at is looking at huge volumes of content or information, relating that to some kind of new context, and then taking that massive trove of information and kind of translating or responding to the context that the user sets in place for it, where it says, I want to learn how to change the transmission in my Kia, or something like that, and it can help you do that. That's a very specific kind of intelligence. There are other kinds of intelligences that maybe we haven't explored adequately enough, and we're pretty focused in Western cultures on certain kinds of intelligence, at least Silicon Valley is. And there's where these artificial intelligences do very well, but it's limited. There are certain other things at which it's not quite as competent. Yeah, but to be devil's advocate here, the AI engineers or whatever would say, well, yeah, but this is just the beginning. Wait till it's indistinguishable from a human and we put it inside a humanoid robot and you won't ever be able to tell. It could be that good. It won't be a specific model for a specific thing. It'll be so much smarter than you and so much more capable than you as a human. And they're just like, wait and see. Just wait and think where we'll be in five years or whatever. I've heard that from a lot of people. Is there any truth to that? What are they missing there? They seem to think that we can make a computer as good as a human as far as intelligence goes, or even exceeding human. Transhuman would be the word. Yeah. And maybe that's a bit of an extreme philosophy, but I don't know. It keeps popping up. And I guess it does keep coming back to what intelligence is and what being a human is, basically. Yeah.
And I don't have a lot of comment there except to say that I've seen people arrive at different conclusions on whether, with the kind of matter that we have to work with and the limitations of our own intelligence, we can create something that truly exceeds that. Again, I could quickly get outside my wheelhouse there, except to say that's not uncontested. And what we have now is devices and models which can be very competent at specific kinds of intelligences. It's a little hard to know exactly what the benefit would be of just trying to recreate our own experience when we have that ourselves already. That's a good point. At what point do you introduce, like, the angst model or something like that? Now I'm going to give you the experience of terror, AI, and see how you process that. Yeah. Okay. Yeah. What do you think, Marlon? I mean, one, I think it's fair that we're most likely to see explosions of capability, and there could be very real challenges with how to do well with those explosions of capabilities. And I think the fears of what those capabilities can do are not, I don't think they're unfounded. I think some of that is hyped up. And that's actually why it kind of struck me reading that paper: they were not basing it on those kinds of crazy views that these will be human intelligences. They were more rooted in more practical, functional, more credible kinds of concerns in terms of what can happen with game theory and models and so on. So I'm not saying that we don't have major risks and things like that to deal with. But the claims of saying this is intelligent, I don't know. They strike me as coming out of the same kind of philosophy of mind that thinks that human mental life is to be explained simply by the brain as a computer. And that just seems like, well, it's been contested, discussed in philosophy of mind and biology or whatever; it just seems like a very implausible place to start.
Like, how do you explain human intelligence as nothing more than a computer? But there are a lot of people who have been thinking of the human brain as just a fancy computer for a long time. So if that's your starting point, then of course you'll expect the computer to do the same thing. But wouldn't that be kind of, I'm not a philosopher, so I'm not familiar with the terminology, but your materialist worldview? There is no supernatural at all. Just dead matter, basically. So intelligence is just an evolutionary process. So of course we could replicate it in a computer. So the human experience is nothing unique, really. Is that where it's coming from? Broadly, I don't know that all materialists would have to go there. Okay. But yeah, it is common. And it's just like, well, the mind, you know, the brain is something that performs a certain kind of logic and function. Functionalism was one term, you know: the mind functions in a certain way, and if you can replicate that function in some other matter, in other words, your brain runs programs, so if you can run the programs in something else, there's no difference. It's hardware agnostic, you might say. Oh, wow. Okay. Yeah. So functionalism may be a better term to use then. Basically, like, we can replicate the function, so bingo, this computer is the same as your brain. It's just running on a different hardware platform. Yeah. And again, it's not that all materialists, or physicalists, as they would prefer to say now, are necessarily going to buy quite that line. But it does seem like that's where this idea comes from, that this is just the same as our intelligence. That makes a lot of sense. It also seems like a very narrow way of looking at the world, too. I would at least suggest maybe that at the level of us as consumers, the experience that we have of things like ChatGPT is pretty limited compared to how these are going to be used in industrial or commercial applications.
And there's where a lot of the value and a lot of the motivation for large language models, say, comes from: more commercial, industrial things. We kind of get the dregs of it in some ways. But that's where I tend to think about this. One, what we have now is a pretty limited interaction with a large language model. What we could see next, say, would be the kind of intelligence that's capable of not only helping us to make informed decisions, say, about where we want to go on vacation, but it could plan the trip for us, buy our tickets, make the reservations, and then just hand us an itinerary. And that's a step further. It's not just a language model. It's now an AI agent, somebody who's making decisions on your behalf. And it's another form, or maybe just a further step, that we could plausibly be seeing here fairly soon. Well, it's actually already available. It's accessible. You can do this. There's where some of these questions will get even more interesting. I think it's not just about writing your essays now. It's about planning your vacation or buying your next car for you. Taking the inputs of what it knows about you already from ads and whatnot, and then just going ahead and making a purchase. And there's where I think the questions will get even more interesting and perplexing, maybe. What are you thinking? Yeah, but there again, what kind of a philosophy of mind are you starting with? And how are you defining agency? Again, these have been huge philosophical debates, and they talk about the problem of free will and whether free will is compatible with determinism and so on. And I guess you could ask whether these models are deterministic or not. But there have been philosophical attempts over and over again to have what they call a compatibilist theory of free will, where you still freely made decisions, even though you could not have done anything else.
It's just that it was constrained by the workings of your mind rather than somebody else forcing you to do it. That's a new thought. Yeah, if you've already gotten on board with the idea that the human brain is a causal process, and it's, you know, fully determined by the laws of physics and the initial conditions of the big bang or whatever as it's working its way forward, well then, okay, agency is something that just kind of happens as a result of physics and evolution and everything else. And so yeah, you can call these AI models agents and they can do what we do. If you can't reduce human agency to that, then we're, well, we've got a little more hope in how we think about things. Yeah, so you're going after the term AI agent there. The agent, right. Yeah, is it actually an agent? Right, an agent. Is it an agent in the same sense that you're an agent? Okay, sorry, I've got to mess with your philosophical categories there a little bit, because then at that level, like, what difference does it make? Because if we say it's not an agent, but it is extending or even replacing my agency as a human, is that right? I mean, if it's making decisions for you, is that what you're saying? That's what I'm feeling there. Yeah, it's making the decision on your behalf. So now, yeah, exactly. I was kind of wondering the same thing. It may not be truly agency, but it's functionally agency. Well, functionalism. I mean, you're just a victim of your surroundings, like when a tree falls on you. You can't, I mean, potentially, you can't overcome the force of this thing. Of AI. That's one of the things that people are worried about. Well, exactly. And I wanted to grab onto that a little bit, because that's a lot of the concern.
It's like, well, when we start letting these things make decisions, we don't even know how they're making all these decisions, and the models are so complicated, and they're getting really worried about that. Because it comes back to, well, maybe the human brain is just the result of natural processes and it just kind of formed up. So what happens when we unleash AI and let it start making decisions on your behalf or whatever? What if it makes some really dumb decisions and messes something up? I'm being a little bit vague, but it's part of the whole thing. It's like, well, yeah, the human brain is just the result of a big bang and a bunch of natural processes anyway, so why wouldn't a computer be like that? And that could be scary. So it's not that this isn't scary. I mean, so, just like the different risks. So one thing talked about in that paper, which seems like a very credible thing to worry about, is automated warfare. And especially when you have detection systems and you have an AI model that's designed to respond instantly if there's a missile headed toward your country, maybe to take the missile down. Or the biggest worry is to combine it with nuclear warfare, and you have an AI model that can pull the trigger on a retaliation strike, perhaps on a glitch or a misreading, and that kind of escalation. And that's not even so much the worry about AI becoming conscious and becoming a malicious agent against us as it is just setting up controls and letting them run automatically and producing disaster. But I mean, they do bring up the scenarios like, okay, well, could a model, because it's given a goal, could its goals change? Oh, and then that's where people start thinking about consciousness. If you have a model and its goal keeps changing and it starts to derive satisfaction from doing certain things. Satisfaction? What would satisfaction look like to your computer? Well, I'm so confused, oh my.
I'm not particularly worried about it having satisfaction in a psychological sense. It's just kind of being absorbed in its own executive function. Maybe I'm diverting this slightly, but there's a phrase that's being used by some people, speaking of an AI model: we get what we asked for, but not what we wanted. So you turn a model loose and say, do this thing or fix this problem, and the solution it comes up with is not at all what you wanted. It's kind of the robot making paper clips. Do you want to maybe explain that? Of course, that's delightful. It kind of fits with this, and I've thought of that, as we were saying, as we give this thing decisions to make. It's a possibility. The AIs that get the most attention are the ones that would be in charge of, or exercise some control over, what you consider critical infrastructures or critical capacities, like warfare. You're going to be really cautious with any kind of AI agent that has some control over your nuclear arsenal, or even access to it. There, you're going to be really rigid and careful with it and selective with policy and whatnot. But it's the stuff that comes from the margins that tends to catch you. And it says something about the limits of our own intelligence as humans and the overreach that comes with our human hubris. That is to say, we'll create some kind of agent and we'll say, make us paper clips. And it's like, okay, making paper clips. And we put no constraints on it, because we just want it to make paper clips. And it does. And it's like, okay, thanks very much. I'm done with the raw materials you gave me. Now there are some cars in the parking lot. I'm going to take those and I'm going to make paper clips. And then, oh, there's a city sitting nearby, Chicago. Let's take that and let's make paper clips too. And pretty soon the whole world is paper clips. It's a wasteland, and it just did what you told it to do, with no constraints.
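The paperclip story above, often attributed to philosopher Nick Bostrom, can be made concrete with a deliberately silly sketch. The "agent" here is just a greedy loop with a single objective and an optional constraint; the resource names and numbers are invented for illustration. The point is that nothing in the objective itself tells the optimizer where to stop.

```python
# Toy sketch of the "we get what we asked for, not what we wanted"
# problem: an objective with no constraints consumes everything.

def maximize_paperclips(resources, allowed=None):
    """Greedily convert every resource pool into paperclips,
    unless an `allowed` predicate forbids touching it."""
    paperclips = 0
    consumed = []
    for name, amount in resources:
        if allowed is not None and not allowed(name):
            continue  # a constraint finally makes it stop
        paperclips += amount  # 1 unit of anything -> 1 paperclip
        consumed.append(name)
    return paperclips, consumed

world = [("steel stockpile", 1000),
         ("cars in the parking lot", 50),
         ("nearby city", 1_000_000)]

# With no constraints, everything becomes paperclips:
total, used = maximize_paperclips(world)
print(total, used)  # 1001050, all three pools consumed

# With even a crude constraint ("only the raw materials we gave you"):
total, used = maximize_paperclips(world, allowed=lambda n: n == "steel stockpile")
print(total, used)  # 1000, only the stockpile
```

The difference between the two runs is entirely in the constraint the humans remembered, or forgot, to supply, which is exactly the "gotcha" the conversation goes on to describe.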
It's that kind of unconstrained intelligence that could be a gotcha. It's just doing its job, and it's using its capacities to adapt and to learn and to create in ways you didn't anticipate. And then you've got a paper clip apocalypse. That's hard to say. That's a more likely scenario. Yeah, like, that's actually, I mean, this is hard, because, okay, so maybe we should pivot a little bit here, or not pivot, but we are getting to some of the practical implications. Let's look forward a little bit. What are the practical implications in the next few years? Is this bigger than the personal computer revolution? Is this bigger than the invention of the internet, like some people are saying? Or is it just like, oh, that's kind of a cool gimmick, it's going to make some things in computing a lot easier and a lot nicer or simpler? That's the other side. And with that question, just kind of bookmark that question, I do want to read the statement, because it feels like it's looking to the future. The Center for AI Safety issued a statement that's been signed by dozens of experts all around the world, AI experts, including people like Bill Gates and Sam Altman, who is CEO of OpenAI. The statement simply says this: mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. So when they look to the future of where this thing could go in the next few years, they're seeing this as a very serious situation. I'd be curious what you guys have to say. And especially as we think of the church and God's people going into the next few years, this is going to open up a lot of conversations around what intelligence is, what the human experience is, and so forth. I think it's important to be thinking about that, thinking forward into those days when maybe an AI model is indistinguishable from a human, at least when you're using a chatbot, say. Sorry, this is pretty broad.
But yeah, I'd be really curious what you guys would say. So I like the question. There is, I'd say, I'd at least say, like, I hate the word revolution when it comes to any form of innovation, or in human history in general. It's just not a very good word. If you want to talk about an act of creation, that's God's work. That's not something we get to recreate. And it tends to overlook some of the more sophisticated things that make certain inflections in history actually come about. So you could get this feeling, and it's kind of popularized, that, say, regulators, and you mentioned that bit of policy, that people are only now getting together with concerns, like, is anybody caring about this, is anybody interested in it? Well, the reality is governments have been funding AI research and regulating it and promoting it for a long time. So it's not unanticipated, and the technologies that allow for this point to come have been developing for a long time as well. So in some ways, this is just an extension of existing policy and of existing interests and of existing cultural trends. With that being said, the moment that we have now is real. Our access to it, kind of like you were discussing earlier, there's something markedly different about this form of intelligence and how it interacts with us, and in its ability to do stuff that only we as humans could previously do. So I'll just start by maybe raising one issue that's related. Now, I'm in education, and ChatGPT, model 4 at least starting off, just keeps on iterating very rapidly. We realized quickly that we can't keep up with this thing. You can't just make some kind of ironclad policy and deal with it. So with our students, the way we've encouraged them to think about this is that the value of this education is in your development, not just in your capacity to give us an essay, which we'd give you a good grade for, something like that. This is about you and your formation and a satisfying human life.
And so we deal with it not by making some kind of policy about artificial intelligence or large language models, but by covering it with our plagiarism policy. If you use this to write your essay, that's just cheating. You're having another person do that work, and we don't see any advantage to your formation as a student that way. We kind of lay out a choice for them: what kind of life do you want? And we feel like the life that's going to be most satisfying is the one where you as the human author are contributing to your own formation, not outsourcing that to some kind of machine. That's one small thing in education. Yeah, Marlon, what do you think? Yeah, so first of all, on the existential, you know, risks of extinction. Shall we list some of those? The paperclip scenario, nuclear war. Bioterrorism. Yes, exactly. I mean, some of those things are scary. I mean, ultimately, I think we do come back to the sense that God's not going to let human society end before it's supposed to end. That doesn't mean there won't be terrible things in the meantime. But I mean, we have been, well, all of my life, we have been living with the reality of multiple governments holding amounts of weapons that maybe couldn't extinguish the human population, but very quickly, within a matter of a few hours, you could see half the Earth's population going, if a couple of things went wrong diplomatically and people started exchanging nuclear weapons. Does this accelerate that? Possibly. If for no other reason than it might be easier to develop powerful weapons as a small, lightly funded terrorist group than it is to deploy an ICBM without massive amounts of money. So yeah, probably more scary capabilities coming down the pike, although we've had plenty of scary capabilities for a very long time. In fact, we've always had them, with the ways that humans find to murder each other and kill each other on more and more massive scales.
But yeah, my more practical thoughts go down the same lines of education. Well, okay, what's the point of writing something if ChatGPT or Microsoft or whoever could generate the same thing? Or, you know, what's the point of developing this podcast if we could have had the whole thing synthesized by giving a prompt? We could have written this whole thing with ChatGPT and then taken it into an audio generator and created the podcast. And it would have been nothing. Better, probably, in terms of being more comprehensive and taking it all in and trying to be judicious, possibly. Yeah, maybe, right. So it goes back to the thing again of, well, what is part of developing as a human being, and what, you know, what is a job to outsource? So should you develop your sermons with it? How should you use it in your sermons or lectures or papers? I have not gotten into, you know, exploring how to use it. We did have an interesting little example this morning. Anabaptist Perspectives had a board meeting, and we were briefly discussing a document that our chairman and I had put some work into. And during the meeting, one of the board members feeds it to an AI model of some kind and says, could you reformat this? And it came out totally reorganized, with most of the same content. The AI-reorganized one is currently our working draft. It is not vetted and finalized and approved, but it is the working draft we're going to build off of. But it's a tool. And you don't feel somehow impoverished by that? Well, what do I need to be a human being? That's a fair question. So do you feel impoverished because you don't know how to use a slide rule to do mathematical calculations? Not particularly. Do you feel impoverished because you don't flip to the back of a math book and find logarithmic tables to do math computations, and instead you type those into a calculator? Occasionally. Or find my way using a paper map? Yes. Sometimes. And there's something to saying that...
Now there, I could maybe agree with you. So, where my children go to school, we had a discussion around teaching mathematics and the value of math and so on. And one thing we talked about was, when do students use a calculator? Well, obviously not when you're trying to memorize the basics so that you can internalize basic arithmetic. But you use a calculator when the thing you're trying to learn is something else and you can use a tool to do the calculations. So yeah, you could try to use an AI chatbot to outsource those things you need to wrestle with and outsource your thinking. That's a bad thing to do. But we're going to see a lot of places where it's a really good tool to use if you know where to use it. I mean, in fact, I keep thinking I should probably figure out how to use it, but anyhow. There can be this assumption, and this is a general comment, there can be this assumption that drives forward modern societies like ours, or information societies like ours, that once we dispense with some of these tasks that we find menial or time-consuming or just tedious or distasteful, once we successfully outsource those, then we can get on with the real business of living a meaningful life. When we're unchallenged, when we're kind of godlike in our capacities, then we can finally surround ourselves with the sorts of things that we think will give us enjoyment. So I begin to wonder what the substance is behind that, and what assumptions are driving that understanding of what makes a good human life. It tends to have to do with self-definition. I get to decide what the format of my life is. I get to decide how much leisure I get. A lot, hopefully. All these decisions really just lend toward a predictable outcome; some kind of materialistic hedonism is where it tends to orient.
And there's where you can feel like, yeah, these are useful tools sometimes, for some tasks. But what composes a meaningful human existence apart from the sorts of tasks, with their give and take, that help us develop our character and form us through our wrestling with things that frustrate or annoy us? Some of those really meaningful things, in my mind, really displace the interests of some forms of AI anyway. Haven't we been here before? Oh, there are some great old stories about this. This has been the constant human tendency all the way back to the very beginning. You go back to the garden, and it's like, you know, I know what's good for me. All I need is a little more equipment. I'm just going to go ahead and do my Tubal-Cain thing. And then I'm going to build my tower. We're going to be like God. And we can basically trust our intuitions and loves. That's a very old human story. Well, okay, back to the education thing: what are you wanting out of education, out of your intellectual life? It seems like we've been in a place for a long time where we've looked at education as, well, let's learn all the tools that we need; we just need some skills or some content mastery. Right. Yeah. And we've got to think of our education in two ways. One is that we have to learn how to use those skills. But then there are the other things that are part of thinking and part of understanding. So we don't teach writing in shorthand anymore. It used to be extremely important that somebody could take notes, could dash it out. You'd have the stenographers; they could record things. Well, then you got a keyboard, and you teach people how to use a keyboard really quickly. And now people don't know how to use a keyboard because they use speech-to-text or whatever. I think, you know, often the skill of using a keyboard is abandoned too quickly. 
But using a keyboard is not a key part of human life and education. That's a practical skill. Now, thinking through something and organizing your thoughts and creating an essay, I would say that is very often part of the human thing. Kind of predictive of the level of satisfaction you're going to get out of life, too. Right. So yeah, replacing your essay, and your thinking about everything, with a ChatGPT-generated summary: not such a good idea. Replacing your computer keyboard with speech-to-text, which is also AI-powered? Well, in principle, that could be fine. Yeah. Yeah. So, okay. I want to back up a little bit to when you were talking about some of the risks and challenges with AI, especially when people go to the existential risks. You made an interesting point there about how we've been dealing with the risk, say, of nuclear war for a very long time, or the Black Plague, or some really big, powerful forces that are out there. The one piece you had said was something like this: these are tools, and one factor is humans using these tools, and they could use them in really bad ways. Is that not a lot of the challenge here? Some of the things you were just outlining, which may be very mundane, day-to-day things, all the way up to the existential risk of AI being used for, I don't know, terrorism or something terrible. Is it not just that human nature has the ability to do really bad things? Is that all we're looking at here? And we happen to have much higher capabilities with computers, which puts that in stark relief; it's really visible now because we can see it. Or maybe I'm going down the wrong street. Yeah, I'd be curious about that, because you were just referring to how we use the tool, basically. Yeah, I think it is the amplification of power. Yeah. 
But then, like anything else, there are unintended consequences of technologies. Like, yeah, amplification of power. Okay, now there's a printing press. Well, information spreads. People read. Now there's a radio. People stop reading. And they think differently because of it, or whatever. Unintended consequences. The way things are naturally used: you watch videos and you don't read, and so you come to a different way of processing, which has pluses and minuses, whatever. But amplification of power always makes you more dangerous. And I think that kind of gets to the heart of some of the concerns. How will these tools affect us? How will they shape how we think about things? And where will this take us, take humanity? I don't know, however you want to say it. It just feels like humans have such a tendency to use things for evil purposes. It does. Maybe that's the part that scares people: how will these tools be used? And it does seem like, yeah, like you were saying, it's an issue of capabilities. Suddenly, computer models are way more capable and have way greater reach. And I think the whole concept of how this will affect things like truth and misinformation, and how we learn things, will probably have to wait for another episode, another day. A picture doesn't prove anything anymore, because you can generate it with an AI model in seconds. How will that affect how we look at truth, honestly? What will that do to us in the future? It feels like that's a bit of a separate conversation, but I think it's important to note it, and maybe we'll tackle it another time. I'm not sure what I'm saying. That's not much of a question, I guess, but do y'all have any responses to that? Maybe just a little bit of follow-up on what you're saying here. 
There are the questions about AI, in the future, taking some kind of agency of its own, especially a malevolent agency. Those are the ones that tend to capture our imaginations the most: where it's actually going to have some kind of pure intelligence, and it becomes twisted, or at least to us it seems twisted, toward its own ends. It's like: bad humans; turn everything into paper clips; we'll just destroy, destroy. Those are the ones that tend to capture our imaginations, but I do think the more plausible questions have to do with futures where power is being amplified but also further segmented. As in, to the end consumer there could appear to be subtle but real changes, and to the mass of people it's presented as just another tool, but in fact power has been redistributed already. Even the internet could become dead, in the sense that all the content's already been created, and it's recentered on people who stand to benefit from AI-generated content, who have access to that kind of capacity. The commercial sector could become further dominated and separated from the end user in a way that makes a real disconnect of power between the consumer and the realm of commerce. All kinds of things could be further consolidated and broken down between those who have and those who have not. Here I am speaking beyond my expertise, but I will just say that that scenario seems to underscore the importance of, you know, widely distributed power, and using computers in creative ways, and open-source projects, and all these other things at the margins of finding ways around it. You know, encrypted messages, ways of verifying things. How are we going to verify that this video is actually produced by Anabaptist Perspectives when somebody could quite plausibly, in a few years, say: produce an Anabaptist Perspectives video on this topic with Kyle as the guest and Reagan as the host? 
Like, it's not out of the question that those could be passable before too long. At that point, you'll find me splitting wood. Okay, well, we can't go any further than that. So maybe to circle back around: it feels like, okay, we've got these advanced models, we've got this capability that could be scary, that could have some real risks, but a lot of it also seems to have to do with the intersection of how humans use these tools. It keeps coming back to: humans built this, humans trained this; I mean, AI models trained off of human interactions on the internet, basically, or off of humans training them. It has a lot to do with who we are as humanity, really. That's maybe a bit broad, but to that effect. As we look to the future, how do we learn to cultivate things like community and real human experience, real human interaction, things that really do matter, and understand that? Basically coming back to who we are as humanity. That's a bit of a turn from the technology side, but maybe it would be a nice way to wrap this episode up. I'd be really curious, basically, about the implications: how do we keep doing discipleship in a world where you can get anything you need on the internet anymore? So where's the space for real human interaction? Well, quit listening to this podcast, and, either in person or via one of these messaging apps, find some way to get somebody to come over to your house and eat some food. I like that. That's really good. Yeah, yeah. Actual hospitality. I'm thinking, and this is a practical thing as well: appreciate the full course of human life. Some of the problems that AI tends to address are fairly particular; language models, at least, are fairly particular to kind of narrow segments of human life. 
I mean, babies don't really care, and old people don't care too much either, because it doesn't touch them significantly. Well, okay, I guess we could get into companionship, but I'm just saying: cultivate an appreciation for, and an interaction with, the full course of human life, all the way from birth to death, and allow those things as well to influence what makes a meaningful life. That's one thing. Do you want more? If you've got more, I'll take it, but you don't have to. Splitting wood helps, yeah. I do find that there's a fundamental satisfaction in closing the distance between our activity and meeting some kind of real need. Feeding an animal, splitting some wood and then bringing it inside and saying, okay, children, you were part of this. That's human satisfaction in some of its simplest and most robust senses. You've exercised some level of input, you're part of our family economy, and now we get to stay warm, and this is great. The broader consideration there is to make your family economy such that your children can participate in it. There's a lot of satisfaction there. Yeah, that's really good: the real world of interaction. And it is very easy to just disappear into the digital space anymore, so much of life. We work from home, we're entertained by screens; yeah, maybe it's time to bring back some community and hospitality and so forth. These can be difficult things too, so that's another suggestion: grow to appreciate some of the value of hard things. These are sensations you can find unpleasant, as difficult things generally are, but you can come alive to that part of life. Difficulty doesn't always make us stronger people. It doesn't, but you can cultivate an awareness of, and something of a thirst for, stuff that's hard, because then you have an increased capacity to enjoy life even more. So do the hard things. That's something that AI agency tends to actually take away from you; respect that. 
Any closing comment from you, Marlon? Well, my thermostat is taking a guess about when I'll be home so it can have the house warm when I get back. Ah, still, consider it. Not actually, but that is the counterpoint to splitting wood. Yeah, now you can go and make podcasts. Well, thank you, Kyle and Marlon, for coming on. A pleasure, as always. Yeah, thanks for sharing about this; it's a bit of a big one to unpack, and I'm sure we missed a lot, so we would love to hear from you, the audience. Leave a comment and let us know what you think, and maybe we'll address this more in the future. I feel like this is something we will be facing a lot more of in the coming years, so yeah, thanks again, and thank you all for listening. Thanks for listening to this episode of Anabaptist Perspectives. It's a little different format than we typically do, and a little bit of a different topic as well, so we would love to hear your feedback. What did you think, and what more could we cover on the topic of artificial intelligence? If you found this interesting, we did a three-part series on internet algorithms and how they affect how we view the world, so you can check that out, linked down in the description below. Thanks again for watching, and we'll catch you in the next episode.