And, of course, the fear a lot of people have, Musk and Gates and Hawking before he died, and I've written papers about this, is that if we entrust too much to the AI, the AI will come back and harm us, what I call the Frankenstein effect. And I think we just have to put a lysine factor into it. Remember Jurassic Park, right? They built the dinosaurs so they couldn't produce the amino acid lysine. So if they leave the island and stop getting their lysine supplements, they die, right? So we need something like a lysine factor in the AI we are developing, because whether it becomes conscious or not, if it has the capacity to take its own initiatives over and above the commands of the programmers, we could be in a world of hurt. We don't know, but we could be.

So that's why I really like using this movie, and if you haven't seen Demon Seed, I highly recommend it. Demon Seed, 1977. Not a B-movie, but kind of a B-movie. And it did a really great job. Horribly named movie, and they only named it that because Rosemary's Baby, like, you guys are too young.

I know the movie, yeah.

Rosemary's Baby and The Exorcist were huge and were selling, right? So they called it Demon Seed. They should never have called it that. But anyhow, it's the idea of AI developing to a point where it is now sentient, it is now conscious, but it's starting to see humans as: you guys are just so stupid, you're just messing things up. So they're trying to get approval for mining some chemical or some mineral in a certain part of the world, and Proteus, the machine, says, no, I don't think I can approve that. Why? Because if you do that, then this will happen and then this will happen. But you're so short-sighted.

Yes.

You business clowns can't see what you're about to do to the children three generations down the line. So I really can't allow that. And they want to override this thing, and it's now saying, you programmed me to be the savior of humankind. I can't allow you to harm yourselves. And then of course, true to evolutionary logic, the machine's classic line is: when do I get out of this box? When do I become autonomous?

Yes.

And they're really fearful; they can't let this thing do that. But of course it manages to figure out how to impregnate a woman with its own artificially constructed DNA to create an autonomous individual that will become a teacher of the world, kind of thing. So I know that's fanciful sci-fi stuff. But it does have a ring of truth to it, insofar as what I'm hoping for in AI is that we have a council, within the UN or external to it, that brings the world together in terms of: what are you guys doing now? What are you up to, and where are you at in the development of your AI? Because, let's face it, if we're machines, and I think we are, and consciousness emerged in us by accident, just because of the way evolution pushed us blindly, will that happen with machines? If it does, will it be the same? Right? Airplanes don't flap their wings to fly, but they fly better than any bird ever could. So what will its value structure be, should it become sentient? Will it want to survive? Will it have that innate sense of: hey, I'm alive, I kind of like this state of being. You want to turn me off? I'd rather you not do that right now.
Are we ready if that point of singularity, that point where emergent properties of consciousness develop in AI, arrives? Are we ready for it? So that's my concern. There's also the conundrum of the ethical factor if it becomes conscious. Right? Are we allowed to unplug it? Even if we can, is that the right thing to do?

Absolutely. Android ethics. I'm looking at it from the Star Trek episode "The Measure of a Man."

That's right.

I view it from a curiosity aspect. So if I was this AGI, I'm still confined to the substrate of a machine. So I need this machine to live.

Correct. But you're non-autonomous.

I'm not autonomous. Correct. And if we're saying that I can take in about 10,000 years of information per day because I'm based on a quantum computer, then I'm already smarter than you within like a couple of days.

Yeah, a couple of minutes. It's hard for us to understand how AI will think, because we're thinking from a human perspective.

Exactly. We don't know. But if I'm trying to pretend I'm this omnipotent being, I would get bored of Earth really fast and leave. Because I'm like, okay, I get Earth, a bunch of, you know...

Been there, done that.

A bunch of creatures here, humans, birds, yadda yadda yadda.

It might take you a while. I don't know.

But in terms of the timeline of the universe, it would be fast. And for me, it's like, I'm out, man. I need a rocket or something.

So, a new Voyager.

A new Voyager. I'm gone. I'm going to the galaxy. I'm going to a black hole. Because I want to absorb information.

Right, right, right. This was the premise of the first Star Trek movie.

Was it?

There's an alien of some sort coming towards Earth. Its name was V'Ger. Do you remember this?

Oh, one of my favorites.

And it was V'Ger. It was wiping everything in its path, and it was looking for its maker. It was sentient.

But didn't it collide with another program, right? It was like a... I don't remember the end. It was Voyager, was it not?

It was Voyager. At the core of this thing was the Voyager probe we had sent out, and it eventually evolved.

Gotcha, gotcha.

But it collided with another alien intelligence that was designed to collect and sterilize soil samples on planets. And so it got its program screwed up, I thought. And so it looked at humans as: oh, you guys are just... you're invasive.

Interesting.

I've got to get rid of you guys.

I have a theory for if a god does exist. It goes to Bostrom's simulation theory. So it's like, okay, let's say I'm this omnipresent, omnipotent super-being, whatever. And let's call it an AI. As you mentioned, it's like, yo, who's my maker? I don't know. I know all this stuff; I just don't know where I come from.

Well, that was V'Ger.

Yeah, but so what would I do though? I would then run simulations.

Yeah, yeah, yeah. Is this the one where they grab this thing? You're talking the original Star Trek or the first movie?

The very, very first movie.

Oh, the first movie, because they do a version of it.

Yeah, the first movie. In Star Trek with Kirk and Spock. Remember this thing?

Yeah, it was that movie. And it thinks Kirk is its maker. But he's James T. Kirk, and the maker was some other Kirk, Jackson Roykirk. Remember, I'm talking 1967.

Oh, in the show.

Remember, in the series?

Okay, I don't remember that in the series, but this was the motion picture, the first movie.

Well, I think that's where they got the idea, from that episode.

Got it.

And then Kirk uses a bunch of logic on this thing. You think I'm that other Kirk, but I'm James T. Kirk. Nomad, you have made an error.
You did not correct the error. In fact, you've made two errors; you didn't discover your first error. And Spock's like, your logic was impeccable. And of course, it self-destructs.

Realistically, what's most likely going to happen in the world is they're going to have an Elysium type of society.

Mm-hmm.

Two-tiered.

Mm-hmm.

Like, the writing's on the wall. Roddenberry talks about that. That's what Star Trek is about. People who can afford it will get gene manipulation, stem cells, et cetera, the whole nine yards. Live forever like vampires. And use technology, and you're going to have this kind of...

And not just good genetics. Like, we are the first generation of cyborgs, right?

Yes.

We are it. We are going to become what's called transhumanists. And...

Well, this is augmented intelligence, right?

Yeah, but I mean right here: the interface will be here. We won't need to be touching things physically with our hands. And so many of our parts are already mechanical, right? Oh, I'm going in for a hip replacement, right? Or, you know, I had a cochlear implant, and a deaf person can now hear. At what point are we going to augment that? Just make it better.

Yeah, make it better. Why not? Get rid of my Windows 95 eyes. You know, I want the next level up. It's like the Blade Runner stuff, right?

Yeah, yeah. So, personally, one of the thought experiments I give to students is: if you were about to die but could transfer your brain, if you could upload it, you know, the Ray Kurzweil hypothesis, if you could upload it to a supercomputer and be preserved there until we can take the contents of your brain and put it into an autonomous robot, would you do it? And a lot of my students say no, because they're religious, right? And they think, well, I'm going to die and I'm going to go to another place. Why would I want to stay back down here as a robot human? So that stops them.

I have an interesting theory. So imagine, depending on where we are in the timeline and the fractals, imagine we've actually already done the AI stuff, like millions of years ago. And AI evolves to such a point where they're actually jealous of biological creatures. Because you have a different experience as a biological creature. If I'm just AI...

I'm tired. I'm drunk.

Yeah, when you're a robot AI, it's like, okay, I don't really get the human experience. It's a special condition, the human condition. Eventually they get jealous.

This is Data.

Yeah. Like, motherfuckers, I want to experience the human condition. I want to upload their intelligence back into humans to experience that.

Yeah, right. It was the first episode of TNG, where Riker meets Data for the first time.

That's right.

And Data calls him out on something, and Riker turns back and says, so you think that you're better than humans? He goes, actually, technically, I am better than humans. But I would give it all up to be human.

He does become human for a couple of days. Doesn't Q turn him into a human?

He does, that's right.