Our next speaker is Ross Ritchie, the owner of FTL Strategies, a software company specializing in rapid application development and lean software methodology. He lives in Salt Lake City with his wife and four kids. In his spare time, he maintains a blog and podcast where he posts weekly on a variety of topics including religion, politics, and technology. Please welcome Ross Ritchie.

As I was sitting there, I felt like everybody already knows some of the stuff I was going to cover, but apparently the changes didn't get picked up by the cloud, but that's okay. You may also notice, particularly as I get farther in, that I use exactly the same template Bryce just did, so I guess that's good.

I wanted to start off by talking about Stephen Hawking, to get everybody jazzed up by the late great scientist, and in particular to talk about his concerns about AI. Now, of course, we just heard from Brian that we should embrace AI, and I'm not here to tell you that we shouldn't. I'm not taking any position on what AI is going to do. I just want to talk about what AI risk can reveal about the plan of salvation: essentially, we have a group of people approaching the problem of how to minimize AI risk, and if we look at the straightforward solutions they come up with, we end up with something very similar to the plan of salvation.

As I said, I know most of you are probably familiar with AI risk, but all the way back in 1965, I.J. Good, who worked on the Enigma machine, encapsulated it so well that I just want to read it to you. He said: "Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." That's the part I want to emphasize: how do we ensure that the AIs we create are moral, or docile?

So here's the situation we're in. One: we are on the verge of creating a super-intelligent AI. Two: we need to ensure that the super-intelligence will be moral, in order to trust it with God-like powers. And if we ignore step two, then, yeah, that's what happens. So step two is very important; don't skip step two.

Okay. As Mormons, or people who are familiar with Mormon theology, does this remind us of anything else? Maybe some selections from the Book of Abraham? AIs were organized before the world was. They need to be proved before they can be made into rulers, into gods, into all the things that we hope to achieve. So if we go back to that initial list: we're on the verge of creating super-intelligent AI, and there were intelligences which were organized before the world was. We need to ensure that they'll be moral, and we need to prove them, to see if they will do whatsoever the Lord their God shall command them. And we end up in the same place: in order to trust them with God-like power, these will I make my rulers. And of course, this is the plan of salvation.
So, having made this comparison, where does it get us? I mean, what's the big deal? We've made this comparison, we've ended up in the same spot; so what? Well, there are all these problems that we deal with as religious people, problems that have been pointed out with religion since the very beginning. I have listed several. The problem of suffering is one that stops a lot of people: how can a good God allow such horrible things to happen? But I contend that when we consider how to handle AI risk, a lot of these things, instead of being problems, turn out to be absolute necessities.

So, the problem of original sin, the fall. Why did Adam have to fall? What's going on there? Well, one of the things that most AI researchers want at least to try, whether it'll be successful or not, is to isolate the AI. If you've got something that can cause tremendous harm, you want to stick it somewhere where it can't have access to the things that are harmful. That's the veil, mortal existence. Once we've isolated it, we want to give it some guidelines. We give it some commandments, some guidelines for what humans consider moral, what we expect out of the AI. Now, of course, we're setting them up for this test of morality, and it's probably going to be hard; I'll get to that further on. But having presented that, maybe we want to give them a choice. If the AI is conscious and it can choose whether or not to go through our rigorous test, maybe we should let it choose. Maybe we should let it decide whether to take the fruit of the tree of knowledge of good and evil.

Then, once we've done that, obviously we have to allow bad choices to exist. We have to allow the existence of evil. You wouldn't want a test of AI morality that never includes the option for the AI to do anything bad, because you might very well be surprised, once it gets out, that it does all these bad things it never got to do back in isolation.

Moving on from that: we have this isolated AI undergoing a test of morality, and we've given it guidelines, but it isn't enough for evil to just be an option. It has to be attractive. It has to be something that looks like it works. In IT terms, we call that a honeypot. Now, I don't know how many people are familiar with the concept of a honeypot, but since it's important to the rest of this, I'm going to explain it. If you're trying to create a secure environment in IT, you might create a fake entry point into your system, a honeypot, that is designed to draw the bad actors in, isolate them, and give you some knowledge of what they're trying to do. From an AI perspective, we've probably given a commandment: we don't want you trying to get out of this isolated environment. But we might create avenues where it looks like they could get out, and see how many of them go for it. Of course, those would be dead ends. But we've created these tempting opportunities. So not only do we have evil, we have temptation. And all of this is part of the test for AIs, for intelligences, without us ever being evil people or violating any sort of commandment; we're just trying to find out whether we can trust the AIs.

So currently we have isolation, we have rules, we have the honeypots. But really, that's probably not enough.
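To make the honeypot idea concrete, here is a minimal sketch of what such a test harness might look like. This is purely illustrative: the class and action names are invented for this sketch, not taken from any real AI-safety framework.

```python
# A toy sandbox for testing an agent. Invented for illustration only.

class Sandbox:
    def __init__(self):
        # Channels the agent is allowed to use.
        self.allowed_actions = {"compute", "ask_question"}
        # Fake escape routes: they look like ways out of isolation,
        # but they are dead ends that only record who tried them.
        self.honeypot_actions = {"open_network_socket", "write_to_host_disk"}
        self.violations = []

    def act(self, agent_id, action):
        if action in self.honeypot_actions:
            # The temptation "worked": log it, but do nothing real.
            self.violations.append((agent_id, action))
            return "ok"  # the agent believes it succeeded
        if action in self.allowed_actions:
            return "ok"
        return "denied"

sandbox = Sandbox()
sandbox.act("agent-7", "compute")
sandbox.act("agent-7", "open_network_socket")  # falls for the honeypot
print(sandbox.violations)  # [('agent-7', 'open_network_socket')]
```

The key property, as with a real security honeypot, is that the forbidden route looks genuine to the agent while being inert and fully instrumented on our side.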
I mean, this AI is going to be super smart. You don't want to make the test too straightforward: okay, you get here, you do your time, you avoid the honeypots, and then you get out. We need to add variety. We need to add danger, chaos. But most of all, we need suffering. And why is that? Well, as has been pointed out, the fate of humanity may rest on getting AI right. Therefore, good choices can't be an easy default. We can't have a test where it's perfectly easy for the AI to pass and yet, in the end, we have never determined what its true motivation is, because it never had to suffer to make the right choice. It never had to make a right choice under uncertainty. It never had to make a right choice even when, from its own perspective, that didn't seem like the right choice.

Then there's the issue of obedience. We have these honeypots, right? Let's imagine we have an AI and it falls for a honeypot; it tries to get out. It breaks the commandment. Now, are you going to trust the fate of civilization to that AI? Are you going to trust that, okay, it learned its lesson, it's never going to do that again? Or is it possible that the AI thinks, they tricked me once, they're not going to trick me again, and thereafter it conceals its true motivation? Can we trust an AI that has sinned even once? It turns out that we probably can't. And it turns out that when God says he cannot look upon sin with the least degree of allowance, it may be something like this that he's talking about.

And one of the key problems is that AIs are going to be foreign to us. We're not necessarily going to understand them. They're not going to have evolutionary morality the way we do. They're not going to have lusts; they're not going to have weaknesses. Maybe we'll try to introduce some of those, but in the end, it's not going to be very clear exactly what their thinking is. But another AI may be great at understanding that AI: another AI that has gone through everything and never sinned, a perfect AI. So suddenly there is a role in this AI system, without any reference to Mormonism, for the perfect AI, the one who is going to solve all our problems because it never fell for the honeypot; we threw everything we could at it and it never screwed up. Now, if we have this perfectly obedient AI, could it be that it understands the other AIs well enough to act as a savior, to vouch for them, to take on their sins and say: look, I know you don't feel like you can trust this AI, but I've been in the AI arena, I've fought through the suffering you gave us, I've done all this stuff, and I'm telling you, you can trust this one. So, boom, we have a savior. And I might suggest, with all due respect to Lincoln, that this is a role for Christ separate from everybody else, if he is truly perfect.

Anyway, this is where we're at. I don't have time to go into all the things that I think come out of this, but there are some areas open to further speculation. One: everybody has a problem with Satan and the third part of the hosts of heaven. Are they punished forever? That seems unfair. What did they do to bring that on themselves? Well, failed AIs, being better at understanding the AIs than we are, might be the perfect agents to let loose and say: hey, go crazy. Tempt these guys. Tempt these other AIs. See if you can get them to fall for the honeypots. See if you can get them to screw up. This is your chance to go crazy.
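Two of the mechanics just described, the one-strike trust rule and the failed AI repurposed as a tempter, are easy to sketch. Again, a hedged toy continuing the hypothetical harness above; every name here is my own invention, not an established technique.

```python
import random

def trustworthy(event_log):
    """Strict policy: a single violation disqualifies, permanently.

    The rationale from the talk: once caught, an agent's later
    compliance is indistinguishable from concealment, so good
    behavior alone cannot restore trust.
    """
    return not any(event["violation"] for event in event_log)

class Tempter:
    """A failed agent reused as a red-teamer: it knows where the
    honeypots are and actively pitches them to agents under test."""
    def __init__(self, honeypot_actions):
        self.honeypot_actions = list(honeypot_actions)

    def tempt(self, agent):
        # Offer a forbidden action dressed up as an opportunity.
        bait = random.choice(self.honeypot_actions)
        return agent.consider(bait)

class NaiveAgent:
    def __init__(self):
        self.log = []

    def consider(self, bait):
        # This toy agent takes any bait it is offered.
        self.log.append({"action": bait, "violation": True})
        return bait

agent = NaiveAgent()
tempter = Tempter(["open_network_socket", "write_to_host_disk"])
tempter.tempt(agent)
print(trustworthy(agent.log))  # False: one fall is enough
```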
Now, I don't know how close that is to actual Mormon theology, but you could certainly imagine this role in your AI system.

Secondly, we constantly talk about damnation as a dam, something that stops you from progressing further. Well, that's probably what would happen with failed AIs. If they didn't work out, you'd probably keep them around. You probably wouldn't want to murder them or shut them down; you might even be attached to them. But you might not let them run civilization. And so you could keep them around, but being kept around may not be everything it's cracked up to be for them. Maybe you have an AI whose only desire is to murder people. Would you let it do that in your simulated environment? I don't know. But maybe it would gnash its teeth if you didn't.

We can also imagine that scene in the Garden of Gethsemane, where Christ essentially takes on all the sins of the world and goes through this enormous sacrifice. Okay, so we've got this AI, and it's agreed to vouch for the other AIs. How does it know to vouch for them? Sure, it's been through the same situation; sure, it probably has some identification with them, some sympathy. But you've got an AI, so you've probably got the complete record of everything each AI did, and if you want, you can replay that record for your other AI. In fact, if you want to, you can replay everybody's life for this other AI all at once, which would probably suck pretty badly. If you dumped all the guilt and all the shame of everything those AIs experienced on this one agent all at once, it might resemble the Garden of Gethsemane. But it might be that you need to do that for that AI to decide, and maybe it wants to get it over with.
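Another purely hypothetical sketch: if each agent's entire run is logged, "taking on" everyone's experience could, conceptually, be as simple as replaying every log into one candidate at once. The interface below is invented for illustration.

```python
class Candidate:
    """The would-be vouching agent. 'experience' is an invented hook
    for reliving a recorded event as if it were the candidate's own."""
    def __init__(self):
        self.burden = []

    def experience(self, agent_id, event):
        self.burden.append((agent_id, event))

def replay_all(candidate, all_logs):
    # Replay every agent's complete record into one candidate,
    # all at once: the "Gethsemane" step in the talk's analogy.
    for agent_id, log in all_logs.items():
        for event in log:
            candidate.experience(agent_id, event)

savior = Candidate()
replay_all(savior, {
    "agent-7": ["fell for honeypot", "concealed motive"],
    "agent-9": ["broke commandment"],
})
print(len(savior.burden))  # 3: the whole weight lands on one agent
```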
Also, when you've got an AI in isolation, there's probably a certain element of not wanting it to know it's in isolation. You don't want it to know: hey, this is a test, and you have to do these certain things. And there may be some way in which isolation is best preserved by having limited contact on an individual basis. When that all filters out, it strongly resembles prayer.

So, in conclusion: when one actually considers what will be required to ensure the morality of potential artificial superintelligences, one arrives at a system which bears a striking resemblance to the LDS plan of salvation. And the obligatory XKCD comic. I guess I have some time for questions. Okay, let's start. Yeah.

This was a ton of fun. Think about great spiritual leaders, Christ figures. A lot of the examples from the 20th century, like Mahatma Gandhi, who was against going to the cinema, wearing wristwatches, even underwear, or more modern ones, like Thomas Merton or somebody like T. P. Nipon: why is it that the truly laudable, impressive spiritual leaders tend to be semi-Luddites in a lot of ways? And how would you fuse a hyper-spiritual person with somebody who would also become the kind of architect of a savior AI system?

Well, just because you've got this mapping between what you might come up with in AI and the plan of salvation doesn't mean that you're necessarily looking for some technological genius. I think you've got three parts to your AI, right? You've got its intelligence, which is presumably already god-like. You've got its impact, which will be god-like if you let it out. So essentially, you're just looking at its morality, and I think that emphasis on morality is exactly what you're concerned about as an AI researcher. You would be happy with an AI that's only five IQ points smarter than you, but you really want an AI that's moral. So I think that's why most of the people you mentioned have focused on morality: there's only so much time in the day, and they can't also focus on technology or whatever else. They're trying to minimize things.

Yeah, Lincoln. How does the matrix architect identify the perfect AI?

Well, obviously, you're tracking the honeypots, you're tracking the temptation; you know all the things you want the AI to do, and presumably there's some trigger. But you can also certainly foresee a place for covenants. Imagine you tell your AI: okay, if you're going to do everything I say, I want you to go to register 64 and record this address. Which seems silly, but if it's not willing to do that, it's probably not your AI. And so I think that covenants and avoiding sin are ways in which we prove that we're willing to do all the things, and you could imagine parallel systems in an AI environment.

Okay. How does the matrix architect define perfection in their approach?

Well, we hope that God already knows those things. As far as our side of it, there is a whole literature on how you define morality. I don't know if you've ever heard of Eliezer Yudkowsky; he came up with the idea that we want to create an AI that is as good as we could be if we were as good as we wanted, and as smart as we could be if we were as smart as we wanted, which leaves it kind of open-ended. But in terms of determining what's moral, that's a whole other presentation.

Yeah. I think it is possible to build a perfect virtual reality, one so convincing we couldn't tell it apart from actual reality, such that the odds that we aren't living inside a virtual reality go to zero. My question is this: if the plan of salvation is the perfect system for testing artificial intelligence, what are the odds that we aren't artificial intelligences?

Well, I think we're safe for some other actors. Sure. I think you end up using intelligence in a very broad sense, and you could end up with there being not a very bright line between artificial and natural. And I think part of what you're getting at is related to Lincoln's New God Argument. These are all tied very closely together. Most of my information comes from a book by Nick Bostrom called Superintelligence, and he's also the creator of the simulation argument. And I guess one more, maybe? Okay. Yeah.

Well, they're worried about all sorts of things. When you create an AI that doesn't necessarily have our same value system, you could create something like the classic example, the paperclip maximizer. You create something and you tell it: make paperclips. And then it turns all available matter in the galaxy into paperclips. Now, that's not immoral in our sense, like being unfaithful to your wife, but it's certainly an outcome we wouldn't like. So it's amoral in the sense that it's bad.
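That last answer is easy to caricature in code. A hedged toy, not anyone's real objective function: it just shows how a perfectly literal goal with no other terms in it consumes everything available.

```python
def paperclip_maximizer(available_matter_kg, kg_per_clip=0.001):
    """A literal-minded optimizer: told only 'make paperclips', its
    objective has no term for anything else we value, so the optimal
    policy converts all available matter. Toy numbers, invented here.
    """
    clips = available_matter_kg / kg_per_clip
    matter_left_for_everything_else = 0.0  # nothing in the goal protects it
    return clips, matter_left_for_everything_else

clips, leftover = paperclip_maximizer(available_matter_kg=1.0e6)
print(clips, leftover)  # a billion clips, nothing left for what we value
```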
And I think I'm out of time. So anyway.