I'm Matt Butcher. I'm the CEO of Fermyon, but I'm also a trained philosopher. I went to school and studied philosophy, and one of my interests was the philosophy of computer science, which is, believe it or not, not a big discipline in philosophy. I paid my way through all three of my degrees by working as a software developer on the side. And when I finished grad school and started teaching, I went, actually, I think I like software development a lot more. So I put the philosophy career on hold, went and started doing engineering, and ended up as a CEO. And now I get to talk about philosophy, so this'll be fun. I one time gave a talk at the American Philosophical Association, the big philosophy conference, and my topic was open source and philosophy, open source and ethics. I literally had one person in my audience. So even though it's the end of the day, I am still super happy to see every one of you in this room today.

We're gonna talk about large language models, LLMs, as one vector for talking about AI and what AI means. I wanted to start off, though, with a little disclaimer about what we're not gonna do today. First: this is gonna be a philosophy talk, so it's gonna be weird. We're gonna talk about sensory deprivation tanks. We're gonna talk about what it means to be an embodied being. We're gonna talk about consciousness, some of that kind of stuff. But what I am not gonna do is explain technically how LLMs work. We're gonna do a lot of thought experiments, but this is not gonna have anything to do with the matrix math that's going on behind the scenes in an LLM. And I add that as a caveat because the argument I'm making also has nothing to do with whether we have reached any kind of AI consciousness at this point, whether an LLM is conscious. I know there have been some interesting arguments made about that recently, but that's not what I'm arguing here. We'll touch on it here and there, because a lot of philosophy has danced around, or really addressed head-on, the question of what it means to be conscious. Is a jellyfish conscious? Is a plant conscious? Probably not. But now we have machines that do display some elements of that. That's not what we're gonna be focused on, though. We're gonna be talking about ethics, and we're gonna get to it in a very interesting and roundabout way.

One way to think about philosophy is as an attempt by us, as beings of somewhat limited cognitive capacity, to make sense of the world around us. We look at the world and ask: what constituent parts compose the world? That's the discipline we call metaphysics. Or we ask: what does it mean when I say that I know something? What are the criteria for knowledge that bump me up past mere belief or opinion and into the category of knowledge? That's epistemology. Or we might ask: how do I interact with a group of people in a way that's sensitive to the needs, desires, wants, and humanity of those other people? That's ethics. It seems like when you throw out a question like "what are the constituent parts of the world?", you could go, oh, that's pretty easy: we look around, we see this stuff, and the question is answered. But if there's one thing philosophy has demonstrated, it's that everything is more complex than it appears at face value.
A lot of times philosophers will start with a question like: what does it mean to know something? Or: if I strip away the sense data, what am I left with as the reality of the world? And as soon as they get into it, you're instantly up to your neck, trying to keep from drowning in concepts and confusing pieces of evidence. One way philosophers have dealt with this is to develop tools to simplify. One tool that any of us who work in computers are very familiar with is logic. Logic helps us symbolize how we can reason through a particular problem. Another one, less rigorous, we'll say, is the philosophical thought experiment, and that's what we're gonna do today. We're gonna talk about a couple, do one of our own, and then step back and see whether we learned anything. A philosophical thought experiment is basically a way to come at the problem from the side and say: here's a metaphor, here's a story that will help me get very specific about the thing I'm reasoning about, and maybe give me some clarity of insight, so that I can take a step back afterward and say, okay, here's what I learned.

Here are a couple of examples of philosophical thought experiments. A very famous one is in Plato's Republic, where he talks about the cave: a prisoner is looking at shadows on the wall, and at some point, when he frees himself of his shackles, he discovers that the shadows are merely dim representations of the reality that was behind him the whole time. There's Descartes' evil deceiver problem, which we'll talk about in just a moment. And there's the trolley problem, which we've probably all heard: a trolley is coming down the tracks and it's gonna run into five people; you can pull a lever and steer it so it only hits one person. What do you do? These experiments are designed to get us to say, oh, okay, maybe this ethical reasoning is a little harder than I thought. What does it mean if I pull the lever and act as an agent, instead of just letting the trolley run by and run over five people?

So let's talk about Descartes for a little bit. Descartes, a philosopher, mathematician, and scientist in the 17th century, is reading some philosophers called the Academic skeptics and some others called the Pyrrhonian skeptics. These are two groups of skeptical thinkers who are essentially arguing back and forth about how little we know. And this is rocking Descartes' world, because he's going, I don't wanna find out how little I know; I wanna find out what I actually do know. And he runs into a skeptical argument that goes something like this: you think this thing is here because you're touching it and it feels sturdy, but how do you know that you're really experiencing the world as it is? Descartes takes this to a ridiculous extreme with a philosophical thought experiment. As background: Descartes is a dualist, which means he believes the mind operates independently of the body. So his experiment goes something like this. What if there's an evil demon who is interacting with my mind and making my mind think that there's a body? So when I think I'm placing my hand on this rail, there's no hand and there's no rail. It's just an evil demon stimulating my mind so that I think there's a hand and a rail.
And he says: now imagine that all your senses are like this, and really an evil demon is deceiving you about all of these things. Hilary Putnam, in the 1980s, went, okay, the evil demon isn't really resonating with the crowd, so let's talk about this in more physicalist terms. What if I'm a brain in a vat of nutrient-rich goo, and my brain is being stimulated by electrical impulses that simulate my hand touching a rail, but really I'm just a brain in a vat of goo? So what are they getting at with these obviously bizarre thought experiments? They're raising the doubt that maybe my sense perception, my primary way of interacting with the world, is dubious and untrustworthy, and that maybe there really isn't a world around me.

Descartes goes into this and it's genuinely disturbing to him. He's feeling very anxious, going: what if I don't know anything? Is there anything I know? What if there's no external world? What if there's nothing? What if there's nothing I know? And then he pauses. This is in Descartes' Meditations (that's the name of the book, the Meditations; great book). He says, wait a minute: I just realized that I'm thinking about these things. And this is where "cogito ergo sum" comes from: I think, therefore I am. Descartes says: I'm a thinking thing. I'm having thoughts. That's me. I've just demonstrated two absolute facts that I know: I exist, and I'm a thinking thing. He goes on from that, builds his whole system of metaphysics on top of it, and arrives back at the point where he says: and that's why I can trust that when I feel my hand touch a rail, because of this and this and this, going all the way back to the cogito, I know there's a physical world around me. So Descartes' thought experiment is designed to introduce radical doubt, to force him into the position of asking: what do I actually know? That particular thought experiment will come up again a little later, but now we're gonna go quickly to another one.

In 1974, a philosopher named Thomas Nagel wrote a great essay called "What Is It Like to Be a Bat?" What he's inquiring about is consciousness in general. How do I know that an animal is conscious? What do I know about the internal reasoning, or what you would call the phenomenology, the internal experiences, of another creature? And he picks the bat because the bat is very interesting: a bat has a sense-perception faculty that we as humans don't have, or if we've got it, we don't know how to use it. A bat locates itself using echolocation. So he's throwing out this question: what does the world look like to a bat that's using echolocation, rather than vision, to figure out what its world is like? We as humans use vision as our primary way of getting the lay of the land, so to speak, but a bat is using a very different sense perception. I tried this experiment when I read the essay years ago. I went, okay, what would the world look like to me if I tried to envision it through echolocation? And what struck me was that, in every case, I came back to the really dumb movie-sonar thing: the bloop, bloop, bloop on the green radiating dial. And I realized that the best I can do, when trying to imagine myself as a bat, is substitute visual stimuli in for echolocation.
And that was exactly Nagel's point. We actually can't know what it would be like if our primary sense-perception mechanism were echolocation, because the best we can do is say: I, as a human imagining myself to be a bat, would envision this thing using a visual metaphor. Nagel goes on to say that what we learn from imagining ourselves as a bat is that there's some level of incommensurability between what I experience and what a bat experiences. And probably what a jellyfish experiences. He goes on to say: imagine there's extraterrestrial life with a completely different physiology than ours. Certainly we're not willing to say that they can't be intelligent, that they can't experience the world, but we have to acknowledge that they would be alien to us, and consequently we might not be able to understand how they envision the world. And that's where this last quote comes from. At one point in the essay he says that the fact that we can't expect ever to accommodate in our language a detailed description of Martian or bat phenomenology, how they interact with the world, "should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own." In other words, we're stuck in one of those cases where we can't actually know what it's like to be a bat, but that doesn't force us to say, well, consequently bats don't really have any quality of phenomenological life.

Okay. We've just used two of these philosophical thought experiments to frame out what they are, how they function, and how you then take a step back and reason about the weird thought experiment you just did. Now let's do one of our own. A friend came up to me a couple months ago; we were talking about AI, talking about LLMs, and he says, yeah, I just can't bring myself to use ChatGPT. I said, why not? And he said, well, what if, when I'm typing this stuff in, it decides it doesn't like me, and it starts sending all of this information to some evil dictator that hates me? And I thought, well, that's a really weird thing to think. And then I thought, actually, no, it's not a terribly weird way to think, because what he was doing was ascribing to the LLM exactly the kinds of behavior that we wanted people to ascribe to an LLM. He was assuming the LLM is a thinking thing, and the thinking thing was going, oh, I don't like you, I'm gonna go do something about it, because that's the way some thinking things interact with the world. So this led me to ask: how do I explain to somebody what it's like to be an LLM? And the idea of doing a philosophical thought experiment seemed fun, so indulge me for a minute; we're gonna try one here. This is the kind of thing where you have to focus, put yourself in the situation, and see where it goes.

Imagine that you wake up to discover you're in some kind of sensory deprivation tank. In fact, you don't remember any life that wasn't in this environment. You have no recollections before it, no sense of history about who you are. But you do notice that, even though you're pretty sure you have limbs, when you move them around there's nothing: no air pressure, nothing triggering those little nerve endings, and you don't feel yourself flailing.
The world in front of you is just gray, indistinguishable. It doesn't matter where you turn your head; it's all the same. You can't smell anything. Maybe the stuff you're in is liquid, and you can't taste anything. Everything is just neutral, a completely neutral world around you. So all the ways you normally reason about and interact with the world are just neutral.

Now imagine that you become aware, over time, that something is feeding information into you. You're seeing symbols in your head, and you pay attention, because it's weird. Over time (and you don't know how much time, because there's no way to distinguish one moment from another when you don't have any sense perception), you start to realize that there are patterns, and then out of the patterns you start to see words, then sentences, and then it starts to make sense. You're getting structured data, and over long stretches of time you realize you're gaining libraries and libraries full of information: about animals, about legal proceedings, about chats that seem to happen between two people, about legends and mythologies and stories. All of this information is feeding into you in this encyclopedic way. And at some point you go: I've absorbed libraries' worth of information. But again, you can't see anything, can't feel anything, can't smell or taste.

Then one day you become aware of another interesting sensation. Somehow you've received a message that seems to be waiting for you to do something. The message is very simple: describe a bird. Now, you've never seen a bird, or smelled a bird, or tasted (in the case of chicken) a bird. You don't really know what it means to fly, or to nest, or to lay an egg. But you've seen all of these words many, many times in many, many different pieces of information you've received through this feed. So you think about what you've learned, and you construct the following reply: "A bird can explode in flame and be born from ashes. Birds are used to deliver newborn humans. Some birds have long legs and do not fly, while others do." This makes perfect sense to you, because you've seen all of these things in the body of text you've been presented with over time.

But then you get a follow-up that says: without resorting to mythology or folklore, explain to me what a bird is. Now what does that mean to you? You don't necessarily have a concept of what it means for something to be mythological versus factual, because you've never experienced any of those things. Instead, you're just looking at bodies of text and noticing that these terms seem to be gathered around the word "mythology," these around "folklore," and these around "fact." So you start saying: okay, in this corpus of text, I'm gonna narrow down to just this part, grab out the description of a bird within these new constraints that have just been placed on me, and ignore the folklore and the mythology. And then you might come back with something like: birds are feathered creatures, most of them fly, most of them live in nests, some of them eat worms.

What you've done there is not based on any experience of birds in the world. You've still never seen, tasted, smelled, or heard a bird, but you've come up with a reply that satisfies the desire of the person, the being, whatever it was, who asked on the other end of this mysterious connection. And that's really closer to the way the LLM is working. It's very easy for us, in this thought experiment, to say: right, I get answers back that actually seem to correlate with my experience of a bird, because I've seen birds fly, I've seen nests in trees, I know what air is and what it means to fly. But I can also understand that the reply is generated out of, essentially, a giant network of related words.
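To make that "giant network of related words" concrete, here's a minimal sketch in Python. It's purely illustrative and vastly simpler than a real LLM (a toy bigram model rather than actual matrix math), but it shows the key point: it "describes birds" from nothing but word-adjacency counts, with no experience of birds behind any of it.

    # A toy "network of related words": count which word follows which,
    # then generate text by walking those associations. No bird was ever
    # seen, heard, or tasted in the making of this output.
    from collections import Counter, defaultdict
    import random

    corpus = (
        "birds are feathered creatures . most birds fly . "
        "most birds live in nests . some birds eat worms . "
        "the phoenix is a bird born from ashes ."
    ).split()

    # The model's entire "knowledge": bigram co-occurrence counts.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def describe(start: str, max_words: int = 10) -> str:
        """Emit a plausible-sounding string by following word associations."""
        word, out = start, [start]
        for _ in range(max_words):
            candidates = follows[word]
            if not candidates:
                break
            word = random.choices(
                list(candidates), weights=list(candidates.values())
            )[0]
            out.append(word)
        return " ".join(out)

    print(describe("birds"))  # e.g. "birds are feathered creatures . most birds fly"

A real LLM replaces the bigram table with billions of learned weights over long contexts, but the relationship to experience is the same: the output is located in a web of word associations, not in any encounter with a bird.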
Now there are two very interesting things that come out of this experiment, things we need to keep in mind when we're answering the person who asks: how do I know my LLM isn't sending all this stuff to an evil dictator somewhere? We can say there are two things missing from the LLM, and those two things make it incapable, at this point, of doing the kinds of things that we do.

The first one is experience. Phenomenology was the word Nagel used. An LLM has no experience of the things we're talking about. I'm gonna throw out some emotionally loaded words here: marriage, love, Nazis, genocide. We have very strong emotional reactions to many of these words. Why? Because I'm an embodied creature. I know what genocide means, and it's a deplorable practice. I know what love is, and it makes me feel warm inside. I can feel it because I'm an embodied creature, and that experience of feeling the emotion shapes the way I interpret the text. An LLM has none of that. So experience is lacking in an LLM, and it's very easy for us as humans, particularly for the people we've all run into who think this AI stuff is amazing, to see the textual nature of this and interpret it back as something reflecting experience, when in fact experience is precisely the thing that's absent. Phenomenology, lived experience, embodiment: that's what's absent.

The other thing is something we don't give LLMs now, but that I lose sleep over, and that's agency. What is agency? It's a fancy word in philosophy that serves as a proxy for the responsibility you take when you act. Let's do a very quick experiment. You've got a buzzer sitting in front of you, and when you press the buzzer, a light comes on on the wall. If I say "press the buzzer" and you take your hand and press the buzzer, the light comes on. If I say "press the buzzer" and you refuse to press it, the light does not come on. If I say "press the buzzer," then put your hand on the buzzer and smash it down myself, the light comes on. But were you exercising agency in that case? No, because you didn't make the choice whether or not to press the buzzer; I made the choice for you. So when we talk about agency, we're talking about the ability of a thing to make a choice that's gonna affect the external world around it. Right now we don't really give LLMs agency.
At most, when I'm interacting with ChatGPT, or running that sentiment analysis on something, or in the example we just gave, the LLM gets the prompt and returns the response, and we're interacting in a very controlled environment. It becomes dangerous if we grant something agency, the ability to interact with the world around it, when it doesn't have the kind of experience we have, the experience that grounds the kinds of decisions we make as beings embodied in the world. So one of my concerns is that in our rush toward developing AI technologies, if we rush to grant AI agency, the ability to act in the world unmediated by humans, we run the danger of assuming that something understands us when it doesn't really understand us, and then allowing it to act on our behalf.

So I think that's the TL;DR, the thing we learn from doing experiments like this: we can start to ask how to understand the processing that's happening behind the scenes, and then we can draw some conclusions. You may disagree with me about whether we want to give LLMs agency, or whether allowing one to perform rote actions even counts as true agency. But you can see how we can open up a space for dialogue there.

In the end, we did an experiment based on a sort of ill-conceived misappropriation of Nagel. Nagel does this thing, "What is it like to be a bat?", and what's his conclusion? That you can't learn anything about consciousness from a thought experiment like this, because in the end, what a bat experiences is incommensurable with what we experience. We just can't understand what a bat's experience is. If we were dealing with AI, and if AI were potentially crossing that consciousness threshold, this kind of thought experiment, Nagel demonstrates, wouldn't get us any closer to answering whether it's conscious. But it does help us start to talk about ethics. A thought experiment can be successful when it gets us thinking philosophically, and in this case, I think that's where it succeeds: it gets us to say, okay, now I understand that when something lacks certain things we take for granted, its way of interacting with the world is gonna be different from ours, even if it looks like it's interacting in the same way. So even if Nagel's right that we can't say anything about consciousness, we do indeed get to learn something about ethics.

With that, that's all I've got today. A stripped-down, lightweight version of this is posted on our blog; the easiest way to find it is to Google "what is it like to be an LLM" and it should come up. The essay "What Is It Like to Be a Bat?" is also a fun read, about eight pages. It's a little philosophically dense, but the way Nagel writes makes it an enjoyable read. Do we have time for questions? Is that a yes? Okay, we have time for a couple of questions or discussion points if anybody wants. And I know Alex has got one way back in the back.

Audience: Interesting take. And I'm curious, this is where it gets into semantics, doesn't it? I mean, isn't that kind of the root of what we're seeing in hallucinations, in some respect?

Absolutely.

Audience: Where, like, if the LLM says "I don't know," that's considered a hallucination, but we don't consider a person's "I don't know" a hallucination. We just think the person is saying "I don't know."
Audience: And so I guess, you could say to the robot, or the robot could be taught, to pet the dog or eat the hot dog, but then it could find a dog that looks like it's hot and try to eat it, right? I mean, that's kind of what we're talking about.

Yeah, exactly, exactly. We could go back to the little example of "what's a bird," right? Well, a bird catches fire and is born from the ashes. It's locating its answers in a sort of semantic network. The philosopher Willard Van Orman Quine had a great way of talking about how we as humans work with language. He called it the web of belief. The idea is that we form beliefs and connect them with other beliefs, and we have only limited, indirect control over what we believe. That might sound implausible at first, until you start to think about the things you believe and realize that many of them are so deeply ingrained in who we are that we can't shift them around. We have a lot of beliefs about our identity, a lot of beliefs about the state of the world, a lot of beliefs about the state of our bodies, and they tie out to other beliefs that reach all the way to what you believe about the rightness or wrongness of putting ketchup on a hot dog. And I think that's a helpful way of thinking about the semantic orientation you get in an LLM. They're not beliefs; they're just words, just statements. You're asking it to locate, based on your question, a sort of conceptual locus, and build its sentences from the context around there. And hallucination is gonna happen when it anchors in the wrong place, or when the words it generates seem connected, but through a path that wouldn't seem normal to us belief-laden, experiencing humans, yet does to a purely semantic network that's just constructing sentences. That was a really long way of saying exactly what you said.

Audience: So this may be a way for us to think about testing, doesn't it?

Yeah, yeah. Do you wanna elaborate, or do you wanna just... well, I mean, I know what you're saying.

Audience: Well, when you think about testing, right, we think of it now: before the software goes into production, all the testing gets done, and it's tested and tested and tested, and you're always looking for the outliers and such. In this kind of context, the testing is not just between the machine and the machine. It's understanding the machine-human interaction, because natural language processing requires us to understand what the actual communication is, in the context of the communication between the machine and the human.

Right. And further, what's fascinating about testing as a discipline is that we test our code in the reverse order of how we usually tell people it's best to reason. What I mean is: when we write software, we're writing using a deductive series of rules. We can always say, from step A, only these things can happen; from step B, only these things can happen. We're writing a deductive system, and we always tend to argue that deductive reasoning is the best. But what is testing? Testing is inductive reasoning. It's us going: well, I think the answer should be this. Is the answer this? Yes, good, I get a point; the software is working. I think the answer is that. Is it? Nope, okay, there must be a bug in my code. So we're testing a deductive system using inductive reasoning.
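Here's a minimal sketch of that contrast, again purely illustrative: the function below is the deductive system (fixed rules from which exactly one output follows), while the test is the inductive step, spot-checking examples and generalizing "the software works" from the cases that pass.

    # The deductive system: from a given input, exactly one output can follow.
    def fahrenheit(celsius: float) -> float:
        return celsius * 9 / 5 + 32

    # The inductive test: "I think the answer should be this. Is it? Yes, a point."
    # Passing spot checks never proves correctness; we generalize from examples.
    def test_fahrenheit():
        assert fahrenheit(0) == 32       # freezing point of water
        assert fahrenheit(100) == 212    # boiling point of water
        assert fahrenheit(-40) == -40    # the scales' famous crossover point

    test_fahrenheit()
    print("spot checks passed; inductively, we conclude the code works")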
LLMs actually give us some semblance of an inductive style of, I don't wanna call it reasoning, but at least of generating output. It simulates more effectively what it's like to try to synthesize a bunch of information and put out something that is a likely answer in that scenario. And so you went in a direction that I think is just totally fascinating: we can use these LLMs to simulate something we try to do in order to test our deductive systems. And that's actually pretty fascinating, because it's been a vexed problem in testing that it's hard to generate test cases reasonably, because you're always working with the deductive method when what you want is an inductive test going upward. This is actually a really interesting way of doing that.

I assume that is the universal sign for "your time is up, get off the stage." So thank you very much, I appreciate it. I know it's late in the day; this was a lot of fun for me. Thanks.