Welcome back to Think Tech. I'm Jay Fidel. This is Community Matters. We're going to talk about AI today in all its forms, and there are many of them, with Elton Fukumoto. And the question before the house is, how is it going to change the world? How is it going to change our future? We'll be right back for that discussion. Elton Fukumoto, thank you for joining us here on Think Tech. Thank you, Jay, for inviting me. Well, you and I got into a discussion not too long ago about AI and chat, what have you, and since that time, it's like the subject has blown up. It's everywhere. It's in every medium. Everybody's talking about it, and the trouble I have is I don't know where it's going, or whether I should advance myself into it or maybe back away from it. So let me ask you the seminal question: how is it going to change our world? Artificial intelligence has the potential to change our world irrevocably. Artificial intelligence could be an existential threat to humanity. So the title of this particular talk, AI and the Future of Humanity, Jay, sounds so grandiose, but unfortunately, I believe it is appropriate, because if we do develop artificial intelligence that is equal to the intelligence of human beings, the problem is that once we do that, it's a very short step to developing ultra-intelligent machines that are far smarter than we are. And I'll give you an example. AI researchers have told us for a long time that even getting a robot to walk the way human beings do is an incredibly difficult task. They thought it was going to be easy, but there's so much that goes into just being able to walk like a human being. The thing is, once you get a robot to walk like a human being, it's a very short step from that to having that robot do a backflip. And if you know what a backflip is, Ozzie Smith of the St. Louis Cardinals used to do one at baseball games, and boy, that's difficult.
Human beings cannot do a backflip unless they're extremely gifted athletically. And this is what the machines will be able to do very shortly after. So we will be confronted with machines of our own creation that are a lot smarter than we are. And if we look at popular culture, the rise of the robots in the Terminator series and Ex Machina and I, Robot, these robots take over the world. And I know that sounds like science fiction, but my talk today really is asking the question: if these robots and if artificial intelligence do potentially pose an existential threat to the future of our species, shouldn't we be talking about it? Shouldn't we, for example, be giving it the same kind of attention that we give to climate change? Because I think, actually, for artificial intelligence the potential downside for human beings is much, much more severe. You know, Elton, when I looked at the web in preparation for this show, I saw something that disturbed me. I'll tell you what it was. A teacher and a scientist, both commenting, as you're commenting, on the future of AI. And the message that both of them gave was, we have to warn people to be careful. Well, we've heard that noise before. We want people to be careful on social media. Are they careful? We want people to be careful about hacking on their computers and phones. Are they careful? The careful message really doesn't work at all. And it seems to me that the concern you expressed, which those two videos that I looked at did not express, is that just being careful is not enough. This affects the whole world, it affects every phone in the world, it affects every computer in the world, it affects every person in the world, all of human endeavor. And so you and I could be careful, but what about the guy who is not well-motivated, who is, you know, trying to do something nefarious? What about him? And so the question I put to you is, what about government regulation?
You know, where does the First Amendment stand on all this? If we let it flow free the way we have essentially let social media flow free, we're in for a hard time. It's going to be ultimately destructive. So AI, as you say, could ultimately be destructive. Don't we need regulation? And what kind of regulation do we need? Okay, those are good questions. So what I'm proposing is that we need to have a discussion, and this would have to be government regulation. Unfortunately, it has to be government regulation of the sort that we see with climate change. In other words, it's not good enough for the United States to regulate artificial intelligence and research into this subject. All the countries in the world have to do it, because not only the United States, but Britain, France, Germany, Israel, Japan, and of course some of our adversaries, Russia and China, are also doing research into AI. And it doesn't do any good for some of us to regulate this and others not to. So it requires something that human beings are not very good at doing, which is getting together and saying, for the good of humanity, we need to suppress our desire to create these machines. So it's a really difficult task. Well, let me put a hypothetical to you here. Let's suppose that there was an international organization that had the influence necessary to make every country come around to a kind of global regulation, every government, every country, a uniform regulation around the world. What would it say? What would it do? How would it regulate AI? Well, all right. So the consensus right now, and I'm quoting from a TED talk by the Oxford philosopher Nick Bostrom, who consulted his colleagues, is that we won't have human-level AI for another 20 or 30 years. So 2040 to 2050. So we do have time, if we were able to do something. And the kind of regulation we would have is we would simply decline to do further research in this area, and we'd have to suppress research in this area.
Nick Bostrom has his proposal, and his proposal is: A, let's let the research in artificial intelligence continue, but we should do task B. And task B would be that we build into these machines the values that we share. He thinks task B is easier than task A. So if we're going to build these intelligent machines, make sure they share our values. And I think his solution walks a very fine line. If you watch his talk, it's actually not very hopeful, because if we make the slightest mistake, it's going to fall apart. Well, you say we, you're using the pronouns we and us and our, but who is to decide what these standards are? I mean, we can make the assumption that there's an international organization, a group of countries that will influence every country on the planet to do government regulation, but you're adding another layer here. And the layer is: what are we building into this software? The moral standards, if you will. Who decides? Would you let Vladimir Putin decide? No. Would you let Vladimir Putin on a committee to decide? So who's the we? And who determines the global morality here? Well, I think those are good questions. And those are why, even though we have 20 or 30 years, it really doesn't look very hopeful. And it's worse than you say, because it's not simply that we don't have our act together and so forth. Human beings are not hardwired to think globally or to think 20 or 30 years down the road. And this is true of climate change as well. Human beings don't really care about what happens 20 or 30 years from now. It's, I have to feed my family, please help me now. Our thought processes are geared to what's local and what is present. We have trouble thinking very far into the future and thinking globally. So we're not going to do very well. Well, I have to change my assumptions on you then. But by the way, I think it gets worse.
I think people are all into self-interest. What we've seen in this country with the 1% versus the 99%, where the 1% has all the money and doesn't care about the 99%, is a kind of greed. It's a sort of reverse altruism. I don't care about my brother. I don't care about my neighbor. And that's not limited to the US. So what we have here is a failure of global concern for the other person. This is really bad. But let's build that into our discussion. Let me offer you this. We're looking at AI and its future effect on humanity. And we have to assume, you know, the reality, and the reality is self-interest. The reality is that if Vladimir Putin wants to run the Internet Research Agency in Moscow and he wants to use AI in order to attack American democracy, he will be able to do that. Nobody is going to stop him. So what do you get if you go high-octane on this kind of new technology and you let anybody use it anywhere in the world, including state actors, also non-state actors? What kind of world do you get, Elton? Well, I think you get a world in chaos, because it's totally like the Wild West out there. And so here's a problem. If you have a very smart machine that's smarter than other machines, the danger is that machine could break into your encrypted computer system. So we have military encryption for the US military and so on. If someone could break in, we'd be doomed. And the reason it's hard to break in is because our machines are fairly smart. But if somebody had a smarter machine, they could break in. And, for example, our nuclear capability could be neutralized. So we'd be in terrible trouble. So the solution I am proposing, and it is probably being done today already, is kind of like what we do with nuclear weapons. Despite the First Amendment, you cannot publish an article describing how to build a nuclear bomb. I think there's actually a case like that. Somebody wanted to publish material describing how to make a nuclear weapon.
And a judge said, no, no, no, we're not going to do that. And so what we have in the nuclear field is companies that build these nuclear weapons. But I'm pretty sure they work for the United States government. I actually own shares in a company called Brunswick, which makes bowling balls. Remember Brunswick, when there were bowling alleys? Surprisingly, they also make nuclear warheads. I mean, that's a combination. Yeah, quite a combination. But their only customer, I'm sure, is the Defense Department. And I think that's what we need to do. We need legislation in the United States that nationalizes this, or says, look, if you're doing work in artificial intelligence, your only customer is the Department of Defense. We will pay you to do this research. All your expenses will be taken care of. But if you come up with something good, we're your only customer. That would work only for the U.S., though, and only for the designated activity. There are so many other destructive activities that are possible. Think of germ warfare, chemical warfare, all kinds of science that can produce weapons of mass destruction. I guess you'd have to make a list, and it would be a long list. And furthermore, I think if you get on the web right now, or worse, if you get on the web and use AI on the web, you can find out how to make a bomb or other weapons of mass destruction. And a case or a statute isn't going to stop you. So this information is going to be more and more available as time goes by. And our existing legal infrastructure, our institutions, are really not able to stop it. Well, I hear what you're saying. All those other things that you mentioned, even though we don't know it, if you could build chemical weapons and so forth, I'm sure the U.S. government already has certain controls in place so that you can't sell those to just anybody, maybe to friendly governments, but certainly not to just anybody.
You've got to get permission already. So for all of those other areas that you've mentioned, I think there already is some kind of implicit legal strategy that limits the dissemination of those dangerous weapons to either the United States government or customers who are approved by the United States government. You know, I can get on the web right now and call up a website in Russia. And I can ask it questions and get answers from it, AI or no AI, that I could not ask assuming there was some regulation in the United States. What I'm saying is the United States can't regulate the whole world. And so if we don't like weapons of mass destruction, we can maybe, through legislation or court action or a combination, through law enforcement agencies, stop it from happening, or at least theoretically stop it from happening, in the United States. But take, for example, gambling on the web. It's not legal in a lot of states, but you can still get on your computer and gamble in various places in the world and actually win or lose. What I'm saying is, when you have a global possibility like this, it's very hard for one country to control it. I understand what you're saying, but let's put it this way. If the United States does control it, I think it would be good for us from one perspective, because let's assume that we already have the lead in AI. And so what we want to do is preserve our lead. And I hate to be chauvinistic about it, but if the United States is protected, that is, if our encrypted systems can't be hacked because we have the lead in AI, let's say we're good with that, because we're the good guys. Now, if we're not the good guys, then that's a problem. But if we can take care of it, and there are many companies, you know, ChatGPT was created by OpenAI, which some guy probably just thought up in his garage, and then Microsoft invested a billion dollars in it.
So these guys are just shooting up, but as soon as they do develop something, we have to bring them into the government and say, look, if there's a military application for this, your only customer is the United States. And I think we're already doing that for nuclear weapons and other weapons of mass destruction, so I think we can follow that paradigm. I think this would be a good time to take a snapshot of where we are. So Microsoft is developing a new AI chat program, I think it's off Bing. And I think Google is developing a new one, I think it's Bard, and Lord knows they have a lot of data to play with. And there'll be others. These guys can put billions in, though; they'll be the leaders. And they'll exceed OpenAI by miles and miles in no time at all, because they know there's money to be made. OpenAI right now is free, although there's talk about how they would charge you a monthly subscription of 40 bucks or something for enhanced usage. Right now, the lines are jammed up with so many people trying it out that they're ready to charge you something to jump to the head of the line. These other guys are not going to hesitate. And if they give you the full-throttle AI and you're a business, they're going to charge you a lot more than 40 bucks. And you're going to pay for it, and you will know you're getting great value for your money, because it'll help you solve problems instantaneously. Business plans, budgets, all kinds of business, all kinds of innovative things, right now, instantly, with the touch of a button. I went and looked on my phone. I went and looked at the Play Store on my Android phone. And I typed in AI, or ChatGPT. And dozens and dozens of apps came up. And I'm not even sure that OpenAI and its chat program were there. There are dozens of others. Everybody is jumping on the bandwagon. And of course, if you're well funded like Google and Microsoft, you can do extraordinary things.
But some of these other guys, they'll do extraordinary things too. So you talked about chaos. Right now, there's zero regulation that we know of, although I would have to agree, I don't think the federal government would be too happy if you were telling people how to make bombs. But there is very little or zero regulation. And now you look at my phone and you see dozens and dozens of people jumping into the competition. Each one thinks he's going to have the future of the world in the palm of his hands. Each one thinks that ultimately he'll be able to make tons and tons of money selling his special use of AI. There's an infinite number of possibilities for AI to solve problems. It's not just one thing. So you talked about the word chaos, Elton. I'd like to ask you what that means. Well, I guess it means that regulation would be difficult. But let me suggest this. At a certain point, it's not necessarily just a matter of the programming, but it's also a matter of the hardware. And I think it's easier to control the hardware. So, give me an example. As you know, there is the famous breaking of the Nazi Enigma code during World War II. And there's The Imitation Game, a movie about Alan Turing, one of the pioneers of AI. He and Jack Good, another of the pioneers, who coined the term ultra-intelligent machine, were working at Bletchley Park in Britain to try to break the Nazi code. But there were two parts to that. Number one is the programming part. They had to understand how this machine was being programmed. But they also had to have an actual mockup of the machine itself, which they stole through regular espionage. So it doesn't work to have only the program without having the actual hardware. And so the hardware is where I think governments can control this thing. Because as far as chaos is concerned, yeah, anybody can get a program off of their iPhone. But how many people can actually control the hardware that would be necessary?
An artificially intelligent machine would have to have a very large set of servers, because the capacity of the human mind is actually astronomical. And so you'd need the hardware, and that perhaps is more easily controlled. Well, this also raises the question, Elton, and it's a big question: we know the software industry is not going to stand by. This is too hot. Too many people are interested. The capability of this kind of programming is unlimited. And it's really exciting when you think of the possibilities. We could talk about that for a while. But it seems to me that the hardware companies are not going to stand aside either. They're going to say exactly what you were saying. They're going to say, we need more horsepower. We want bigger servers. We want to achieve every possibility that AI can offer us. So your desktop, your laptop, your phone, it's all going to be different going forward. You ask about the future of humanity here. These hardware devices are going to change. And I'll bet you it's going to be in the near term. You won't have the same kind of computer on your desk. The server, the cloud, it's all going to be different, because everybody will want AI. The world will want AI. And to go to your point, how do you control that? They're going to be busy all over the world. It's not limited to the US: building chips, building cases, building circuit boards all over the world. How are you going to control that? Okay. So if I could, in the brief remaining time I have, throw a monkey wrench into this whole procedure, there's one weakness in AI. I mean, we've been talking about it, but there is a counterargument. The counterargument says, well, these machines don't have emotions. As a matter of fact, they're not even conscious. And as a matter of fact, it's pretty clear that computer scientists don't even know where to start making a machine conscious.
So it could be that although these machines are very good at fooling us by simulating intelligence, they actually fall short. I mean, that's a possibility that's out there, and we should also consider it. But I take your point. And I think if you look at the landscape, it's pretty bleak. But let's hold out some hope that our reasoning is flawed somewhere. Well, the positive side of it is what makes people so excited. I mean, I asked AI to write a love poem to my wife. And it did immediately. And it was very good. I said to myself, I wish I could do that. I asked AI to answer a question on whether we should get a new couch for our living room. And it answered me and told me all the reasons I should get a new couch, and all the reasons I shouldn't get a new couch. This opens up what you and I were talking about when we originally spent time on this subject, that is, the law. Counsel on one side makes an argument. Counsel on the other side makes a rebuttal. And then it goes to a little black box on the bench, maybe six inches by six inches. And the black box listens to everything. And the black box makes a ruling. That's where we're going, isn't it? I think we're already there in the medical field. Artificial intelligence is already making diagnoses which are better than those made by doctors. So in life-and-death situations, the machine decides. And so I think we're already there in that field. And in the law, yeah, that could be our future. But again, in the law, and we're thinking about this, we're having that case decided by something which might be logical. It's very logical. Maybe the law is logical. But if the decider is not even conscious, I don't know how we can delegate that sort of decision to an entity which is not even conscious. I'd like to offer you two thoughts on that.
Number one: when the Internet first came around, which was not that long ago, essentially 1995, when Bill Gates did a reverse move and all of a sudden took Microsoft onto the Internet, that was really brilliant. There was not a lot of abuse. People were so excited to have this new way of communicating and learning and answering questions. It was so exciting, I remember. And it took a while, five, ten, maybe fifteen years, before the bad guys got involved and started abusing it with all of their hacking and phishing and what have you. So we had a moment of respite, if you will, from the excitement, the thrill of the new technology, to the time when it was abused. And I suggest to you, Elton, that could happen here. Everyone in the world could be so excited about the possibilities that they don't use it, at least not at first, for something nefarious. Hard to say what kind of a honeymoon we might have. But I think we're still in the honeymoon phase. And I think there are so many wonderful possibilities to help humanity that it's sort of irresistible to find the next one. Because you can make a lot of money. That's the way this is going to evolve. You will be able to make tons of money if you come up with an AI application that answers questions that could never have been answered before. That's one reaction I have. The second is, you talk about consciousness. You talk about emulating human consciousness, self-awareness. Like HAL 9000 in the movie, what was it, 2001: A Space Odyssey? "I'm sorry, Dave, I'm afraid I can't do that." That machine, although it was programmed and had limits, arguably did have consciousness. And I believe, let me throw this at you and see what you think, I believe that we're not that far from a machine, forget about doing backflips, we're not that far from a machine that would essentially be able to look at itself, that would have consciousness. What do you think? Well, the consciousness I'm thinking of is really basic.
So the consciousness I am thinking of is something that a dog has, something that a rat has, something that every mammal has. The classic article is "What Is It Like to Be a Bat?" by Thomas Nagel. If you can conceive of what it is like to be a bat, blind, but sensing through echolocation, it's conscious. What is it like to be your smartphone? Flat. Nothing. Nothing happens. It's not conscious at all. We know that already. And so every mammal is conscious in a little way, but so far, none of our machines is conscious. And the AI people would not even know where to start to make our machines conscious. So I think that gives us some hope that machines are actually going to fall short, and they won't threaten us, even though we think they will right now. Okay, I'll buy that for the moment. But let me say, it would not surprise me if my Samsung began to develop consciousness in a while, and began to have a self-image of itself as a Samsung. And you and I should follow this, Elton. We should get together and see, because this is so fast-moving that it's not something we can have one show about and stop. So we have to circle back and examine it from time to time. What do you think? Sounds good. Okay, Elton Fukumoto, an attorney who's been around the block and been thinking about AI. This has been a great discussion. Thank you, Elton. Thank you very much, Jay. Aloha. Thank you so much for watching Think Tech Hawaii. If you like what we do, please like us and click the subscribe button on YouTube and the follow button on Vimeo. You can also follow us on Facebook, Instagram, and LinkedIn, and donate to us at thinktechhawaii.com. Mahalo.