Our chief scientist, Ethan Allen, wow. And I love these shows. Today it's all about Likable Science, which is originally Ethan's show. And today it's all about AI, artificial intelligence, and its potential for good and evil. And wow, this is important. I hope you're taking notes. Hi, Ethan. How are you doing? I was watching a video on YouTube about the race we have with China on AI. And it's so interesting. It's different than the race we have with Russia. Russia's not nearly as advanced as the US is. And China is catching up. They claim to be able to overtake us by 2030. And I think they will, because they're putting so much time and effort into AI. But what people don't understand, okay, is what it can do. And I know you've been thinking about that. What can it do, Ethan?

I think maybe the question these days is what can't it do? I mean, it can do an amazing array of things that we thought not possible for technology to do a few years ago. An amazing array of really great, wonderful, powerful things. In the world of medicine, AI can take away a bunch of the tedious work of scanning images for very small discrepancies. So finding tumors before they grow, all that kind of stuff. And it can do it faster and better than our best human readers. It can make business and industrial processes much more efficient by seeing patterns that, again, we don't see because of all the complex, multifaceted parts of supply chains, the bottlenecks in transportation and storage and human resources. It can take all those factors, put them all together, and figure out the best algorithms for it. But on the other hand, it gets used in some very, not so good ways. The big one that I see these days is in the People's Republic of China, where, particularly in the Western provinces, they're using it very much for social control. They gather data from people, and they have cameras all around. They're doing facial recognition. Everyone has to carry their ID card.
They have to use that for transactions. They have to use that for anything they do, basically. All this data is fed into a massive database, and AI basically looks at this stuff and says, this person's changed their pattern. They're leaving by the back door of their house now. They were leaving by the front door all the time. I wonder why this is. Are they doing something they don't want you to know about? I mean, it gives this unprecedented level of control, and it's frightening, because this kind of technology was supposed to open things up and democratize the world and give us all the power. And unfortunately, authoritarians seem to have usurped that to some extent.

Yeah, I mean, any self-respecting authoritarian is gonna use the technology. I mean, Adolf Hitler did at the time, with all that propaganda he did; he was using the modern technology, and the modern technology today is AI. So if you wanted to do surveillance and facial recognition, there's no question that your advanced techniques in AI will help you do that. What is interesting, though, and it's worth discussing, is that, okay, it's two paths in the road, a fork in the road. If you wanna be an authoritarian government, of course you would use AI to squash any protest. On the other hand, if you wanna be a protester, you use AI to efficiently organize your protest, like with social media and all this. So there's a tension there between one and the other. My guess, though, is that the one who is putting more money into it, advancing AI faster than the other guy, is gonna win that tension. So protesters, by definition, they don't have that skill. Governments do, or they can get that skill. And at the end of the day, the old notion of changing regimes, of having protesters seek and obtain some relief, it's not gonna work. Governments are gonna be better at surveillance and, what did you call it, social control, people control, than protesters are at communicating protests. You agree?
Absolutely, and it's why our governance of AI needs to be put in place, and we need to be working very hard on that. And I saw an article today that suggested the European Union is actually now, essentially by their own rules, gonna have to dump a whole bunch of data that they have sort of inadvertently gathered, or gathered without realizing it, on people, data that under their own rules they're not allowed to gather. And good for them, I mean, good for them, that's great. I think we need that kind of ethical overlay on it, because, I mean, right now, while I just criticized the PRC, we have big corporations in the US gathering amazing amounts of data, and there's not even the government controlling how they use that. So it may not be quite as bad. They may not be able to call out the troops against you if they don't like what you're doing, but they are gathering pretty much a lot of the same kind of data. We don't quite have the camera arrays, and the facial recognition stuff hasn't spread here the way it has in their country, but yeah.

What's odd is, maybe you can help me with this. In an authoritarian government like China, you know, they're gathering data. You know you're getting it every which way, and there's a huge library of data on you. You know that. Everyone in China knows that. In the US, we don't know that, because the corporate culture in the US is you don't tell people, you just take it. You just do it. And it's interesting that we're so into privacy and human rights and civil rights. Sometimes we pretend to be into that when we aren't really, but we're not as advanced in terms of the transparency of the use of AI as the Chinese. They tell them; everybody knows. I wonder when we're gonna catch up on that. This is all a big, you know, competition. It's a competition. It's a multifaceted competition. It's a kaleidoscopic competition.
And I don't think, you know, the average person on the street in the United States appreciates in any significant way what AI is, what it can do, and how it is affecting our life and how it will affect our life going forward.

Yeah, I mean, in ways large and small. So recently I had this slightly unnerving experience. I opened up my computer, typed in someone's name. This is a friend of mine. I was gonna go to her house, so I needed her address. Got her address out, picked up my phone, clicked on Google Maps. Her address was there. It was the first thing on my phone.

Whoa, they're watching you, Ethan.

All right, yeah. I mean, there was something going on there where somehow they understood: oh, he's looking up an address here, well, he's just gonna want it on Google Maps on his phone. Even though they were completely separate applications.

Yeah, it was just a little like, Big Brother is watching you. And although in that case you don't mind, you know, you say, gee, that's impressive. They know what I want before I know what I want. It's like a movie, it's like a science fiction movie all around you. So a bunch of things come to mind about this. It's so smart that it makes itself smarter. It develops its own intelligence beyond where you started with it. It's remarkable. You know, at the beginning, I mean, I took a little program one day, organized by Kamehameha Schools. And they had some technology people there, from Ocean, as I remember. And they were telling us, you know, the fundamental principles of comparing images, comparing one piece of data against another. And you compare these different things that you're looking at at a very, very high speed with a lot of processing power, and you can make the matches. And when you make the matches, you have a little conclusion. And then you multiply this times a zillion, and you get the most remarkable computing power ever imagined in the world.
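The pairwise-comparison idea described above can be sketched in a few lines. This is a toy illustration, not any real system's code: each item (an image, a face, a record) becomes a feature vector, every pair is compared, and a "match" is declared when similarity clears a threshold. All names and data here are invented for the example.

```python
# Toy version of the pairwise-comparison idea: vectorize items, compare
# every pair at speed, declare a match above a similarity threshold.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def find_matches(items, threshold=0.95):
    """Compare every pair in {label: vector}; return pairs that match."""
    matches = []
    labels = list(items)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if cosine_similarity(items[labels[i]], items[labels[j]]) >= threshold:
                matches.append((labels[i], labels[j]))
    return matches

# Three made-up "feature vectors": two nearly identical, one different.
scans = {
    "scan_a": [0.9, 0.1, 0.4],
    "scan_b": [0.91, 0.09, 0.41],   # near-duplicate of scan_a
    "scan_c": [0.1, 0.9, 0.2],
}
print(find_matches(scans))  # only the near-duplicate pair matches
```

Real systems do this over millions of items with specialized hardware and approximate nearest-neighbor indexes, but the core loop, compare, score, conclude, is the same shape.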
And you go through hundreds of millions of comparisons in a second. And then at the end of that processing, they know a lot. And it doesn't have to be comparing faces for facial recognition in Xinjiang. It could be, you know, comparing one virus against another. It could be solving the mysteries of cancer. It could be, gee, anything, anything where the human brain is not capable of doing it, where even a group of people is not capable of remaining rational, cool, calm, collected and making those zillions of comparisons. So this suggests to me something I've thought about since I started practicing law. You could have a little black box that would make legal decisions. It could look at every precedent. It could apply the precedent. You can say, oh, Jay, come on. You know, that's not appropriate. We are human beings, and we have feelings and we have cultures and we have all of these human attributes. You can't let a little box tell you what the law is. Oh, really? I think it's coming. The other thing is, you can go one step further than that, and you can say, how about governing us? How about running Congress, for example, or being Congress, a little black box about that big? The query is, would that black box do better? Yes, it would. It's got to be in the future somewhere. The problem is, it's going to get there only by some dystopian process. It's not going to get there because we had a big vote around the country. People don't understand. But one day we'll have a dictator, I think. That's the future. And in the dystopian world that will follow, there will be the little black box, which is controlled by the autocrat. And he may be a benign autocrat, hopefully, or at least not too evil. But I think if you can solve these problems we're talking about, you can solve any problem. And China believes, and China is investing a lot of money into AI, that AI will rule the world in a few years. The competition will be resolved, as will any question. I mean, for example, they want Taiwan.
Okay, we know they want Taiwan. So you could put it out in a combat information center. What are our options? How do we get Taiwan? We could do this, we could do that, we could do this and that, and we could have a timeframe, and we could deploy people and equipment and weapons. And tell us, Mr. AI, what is the best approach? And in one second it will tell you the best approach, from all that you ever knew or thought about, with all of your resources. It's like a computer game, except it's real. And it gives you an answer that presumably you can rely on, and you want Taiwan, you get Taiwan. I mean, this is gonna happen.

Yes, I mean, one of the really tricky issues is that AI, though, even self-learning, self-teaching AI, typically learns from data that you feed it, right? And therefore, if the data that it's fed is biased in some way, your AI learns that bias and incorporates it. And there have been a number of instances, for instance in facial recognition systems, where they reflect the bias of the developers of the system and the data that they were fed. And so I would be very concerned in that case that, depending upon who's programmed the data to take over Taiwan, you're gonna sort of get the answer you want, in some sense.

Oh, sure. You program it to be an autocratic government, you get an autocratic government. But let me add this. The original programming, okay, can be modified by the AI itself. In other words, if you feed it also the basic morality, the basic norms of a given human civilization, and say these are the ones you always have to look for, then even when an autocrat feeds in really evil stuff, the AI can say, no, that's not right. I can't do that for you, Al. And so what it's doing is it's learning how to apply the norms, the morality, the ethics that you started out with against somebody who might wanna override that.
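The "biased data in, biased model out" point can be made concrete with a deliberately tiny sketch. The "model" here is as simple as possible, it just predicts the most common label it saw per group in training, and the groups, labels, and counts are entirely made up for illustration; real bias in facial recognition systems is subtler, but the mechanism is the same.

```python
# Minimal illustration: a model only knows the data it was fed, so a
# skew in the training set becomes a skew in every future prediction.

from collections import Counter, defaultdict

def train(examples):
    """examples: list of (group, label). Returns group -> most common label."""
    by_group = defaultdict(list)
    for group, label in examples:
        by_group[group].append(label)
    return {g: Counter(labels).most_common(1)[0][0] for g, labels in by_group.items()}

# Invented training set: group "B" was, say, mostly photographed in poor
# lighting, so its examples came in labeled "no_match" far more often.
training_data = (
    [("A", "match")] * 90 + [("A", "no_match")] * 10 +
    [("B", "match")] * 30 + [("B", "no_match")] * 70
)

model = train(training_data)
print(model)  # {'A': 'match', 'B': 'no_match'} -- the skew is baked in
```

Nothing in the training code is malicious; the unfairness lives entirely in what data was collected, which is exactly why who assembles the data matters as much as who writes the algorithm.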
And so I think it's gonna develop a personality, though never a consciousness. Consciousness is something that, maybe that'll happen someday, maybe in our lifetime, but not yet. And so the question is, who programs it first, and who gives it that fundamental human set of values? 'Cause you can override that if you programmed it the wrong way. But if you programmed it with those values, then even an autocrat wouldn't be able to use it, because the AI would learn.

Right. I mean, you're sort of talking about Isaac Asimov's Three Laws of Robotics, right? The robots are not allowed to harm people. End of discussion, right? And yeah, good luck.

Oh, you're right. If you can imagine it, it can happen.

Yeah, I mean, I think you're absolutely right. There should be a whole sort of training for anyone who's going into AI development on ethical use of AI: what kind of ethics you'd first instill in yourself, or get instilled in yourself, and then instill those same ethics in your AI so that it works for good. All technologies, virtually all technologies, are essentially value neutral, right? They have no inherent good or bad. It's what use you put them to, what we do with them, what we as people do. And so, yeah, trying to stack the deck so that your AI is used for good and not for evil is sort of the name of the game here, right? Putting good people in charge of AI development, asking them to be very careful about this, rather than just saying, hey, make the AI do this for us, and using an ends-justify-the-means kind of argument, you know?

At the end of the day, we really have to have responsible people who are in charge of the AI. I mean, super responsible, because there is super leverage, you know? Years ago, a given public official wouldn't have that much power. But as time goes by, and you have communications, you have command and control kinds of things happening, then the official has more power.
And, you know, the president could push a button. Who knows what? I mean, the power is enormous, but you add to that AI and the power to control minds, the power to control public opinion, the power to control information, plus the power to control weapons. That person is really, really powerful. And that's why he has to be carefully selected. Maybe the AI should select him or her.

Yeah, I mean, there's the whole interesting emerging related field of, you know, human-computer interfaces, right? It's being used in the medical field now to let persons who are spinal cord injured, paralyzed, move robotic arms by their thoughts, right? And so their robotic arm can reach out and get them a glass of water or, you know, help them whatever way it can, just by the person thinking about doing that themselves. Great stuff, wonderful stuff. But this is not even Model A, Model T technology. I mean, this is pre that, you know.

Yeah, I agree, it's peanuts compared to what can happen.

Right, well, sorry, you're right. What will happen. So China, it's not fooling around. They are using it for everything they can. They are building systems where they will use it for more and more things. They are very advanced. And although, you know, in the American style of exceptionalism, we believe we're the best sort of thing, I'm not sure we are the best, you know?

The conventional wisdom would say we're ahead. We're not ahead by much, but we're ahead.

Well, how do we know that? They're spending, you know, I remember a lunch here in Honolulu, and this has got to be maybe 15 years ago, where the vice mayor of Beijing was there at this lunch, right here. And I said to him, you know, it's wonderful that you guys value engineering, and 29% of your college graduates are engineers. And that speaks to your view of the future. And he said, you're right. We care a lot about engineering, but it's not 29%, Fidell.
It's 59% of our college graduates are engineers. That was 15 years ago. Now they're saying everybody's got to get into AI.

Yeah, they've grown their scientific expertise tremendously. Right now, they're still limited, though. What's limiting them, ironically enough, is the cutting-edge chips that you really need to do your AI well. Essentially, many of them are produced in Taiwan.

Well, they want Taiwan.

Right. But TSMC, isn't it, you know, Taiwan something? And yeah, but again, that's a nice sort of Mexican standoff in an odd way, right? Because Taiwan does not want to do that. And I'm quite convinced that they'll blow up the factories if China tries to take them by force. You know, they'll say, fine, you know, you'll get a wasteland here, you'll get an island with a bunch of rubble on it, basically, and, you know, too bad.

Oh, what a tragedy that would be for so many people and things around the world. You know, with AI, remember Andrew Yang? And, you know, he was talking about guaranteed income and all this, and with our technology, you know, everyone can live without working a lot. With AI, it's possible to develop a civilization where it's all done for you. And wouldn't that be wonderful?

Well, now that gets to a very interesting point. And you sent me a link earlier today, or the other day, about how AI unfortunately is typically not being used to supplement people's work, to help them do their work better or more efficiently, but is being used to replace people. And so we have these bigger, more complex jobs being done with fewer and fewer people. I was shocked to read recently, or hear, about these monster container ships that now exist, right? Container ships that are five football fields long, that stand 35 stories, or 15 stories, high off the water line to their deck. Monstrous things. They run typically with a crew of about two dozen people, and they can run with a crew as little as maybe a dozen people.
Just, it's amazing if you think of something huge like that, that you're running and you're moving through the oceans, and you've got a dozen people who are doing that. I mean, you know, a Roman galley took, you know, hundreds of people.

That's right, that's right. And with very little payload. So there's this odd thing, that they have not, unfortunately, been used to supplement people and make our lives better. In every case, they have pushed people out of jobs. They are indeed exacerbating the inequity in our society to some extent. There are the people who work with the machines, who are doing pretty well. And then there are people who have lost their jobs because their part of the manufacturing chain is now being done by the machine.

Yeah, but Andrew Yang would say, okay, all right, they lost their jobs, but we have the ability to take care of them. You know, it's the humane thing to do, but we have the power to do that, and we can feed them, we can clothe them, we can provide housing for them. We could do all of that using this very special intelligence that we have. It ultimately falls on government, and government is an expression, in one way or the other, of public opinion. And I feel that government and public opinion are way behind where we could be. In fact, to talk about good and evil, although Russia is not nearly as advanced as the US or China in terms of AI, they use it for nefarious reasons. They're collecting data, they're using the data to try to twist public opinion, twist votes. You know, Cambridge Analytica was working with them, or for them, or something. And the Internet Research Agency is busy, busy, busy trying to control every election in every state and the federal system in the US right now. And we are really not able to stop them. And so how do they do that? Well, it's very interesting. I'm sure it's done with AI. Let me tell you a short story I read recently about movies on, I think it's Twitter.
So you see a little clip, okay? You like the clip. You stay around for 10 seconds, 15 seconds. It's making a record of that. It knows you like that clip. Okay, next time it's gonna present to you another movie on the same subject. It's like that weird thing with Google Maps, right? It knows you. And so it's going to evaluate your taste, and maybe your political position too. And it's gonna put you in a bubble if it wants, okay? And that's what the Internet Research Agency is doing. They don't care who's right and wrong. They just wanna divide the country. And they do this by putting you in a bubble that seems appropriate for what you are reading and thinking and watching. And it's not rocket science, but it is using AI. And so I think they have done a lot to create the division in our country, and they're trying to do more of it, because it's the new kind of war. It's AI war. And if they can divide us all, they can neutralize us as a great power.

I guess I would characterize it more as the new generation of information ops, you know? Instead of just showering out propaganda, you now create the truth. I mean, the classic movie Wag the Dog, right? This is becoming all too real: we are able to create the illusion of reality. And since we live in this sort of post-truth world, where nobody really seems to care much about what really is the truth, if you can get a good, engaging story that sounds like it might be true, it's pretty effective. You can get some people really good and angry. It'll spread like wildfire. Everyone will believe it, or enough people will believe it. Its truth value no longer counts, sort of. And that's now, with AI, being able to start manipulating images in incredibly sophisticated ways.
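The engagement loop Jay describes, watch time recorded, taste scored, more of the same served, is mechanically very simple. Here is a stripped-down sketch of that loop; the class, topics, and numbers are all invented for illustration and are not how any particular platform implements it.

```python
# Sketch of an engagement-driven feed: every second of watch time
# nudges a topic's score up, and the feed serves whatever scored highest.

from collections import defaultdict

class FeedRecommender:
    def __init__(self):
        self.topic_score = defaultdict(float)

    def record_watch(self, topic, seconds):
        """Log how long the user lingered on a clip about this topic."""
        self.topic_score[topic] += seconds

    def recommend(self):
        """Serve more of whatever held the user's attention longest."""
        return max(self.topic_score, key=self.topic_score.get)

feed = FeedRecommender()
feed.record_watch("politics", 15)   # lingered on a political clip
feed.record_watch("cooking", 2)     # scrolled past a cooking clip
feed.record_watch("politics", 12)   # lingered again

print(feed.recommend())  # "politics" -- the bubble starts to form
```

The loop is self-reinforcing: each recommendation generates more watch time on the same topic, which raises its score further, which is the bubble.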
So if somebody takes some ThinkTech clips, watches you for about 20 seconds on this clip, they can then fairly quickly and easily extract you, make you appear to be saying anything they want you to be saying, making any gestures they want you to be making. A few years ago, that was possible, but that was a big, complex, expensive thing to do. Now, as I understand it, there are essentially apps you can buy to do that, and you can buy them fairly cheaply.

This is a frightening thing. This is just gonna degrade the world of truth. And it's already happening.

It's already happening, but the pace of it is accelerating in an ugly way now, and it's getting into realms where it's gonna become more and more difficult.

And you lose your ability for critical thinking. It's taken away from you somehow, because you're infected, may I use that term, infected by this stuff that's thrown at you. It's out of a novel. But I think what's interesting, and you pointed it out a couple of times here today, is that this is the intersection of science, call it data science, and social science. And when this machine, this AI machine, creates a story that it wants you to believe, that it sends to you because it thinks you're vulnerable to get into that bubble, it's not accidental. It's not like some kid in the Internet Research Agency in Moscow is designing a story. AI is designing the story. It's got a bunch of options. What do we tell Jay Fidell today that he will believe? And they know enough about me, and about the world I live in, to create a story that's completely believable. And that's the magic.

Yeah, if they know, for instance, what makes you angry, and they know that you'll tend to re-send things that make you angry, then hey, they'll send you stuff to make you angry in the way they want you angry, to make you angry at the things they want you to be angry at. And essentially it will not only have infected you, as it were.
You will now go off and infect a lot of other people. You're an influential guy. So yeah, it is very hard. I would take issue, though; I would argue it does not destroy critical thinking, or even degrade it. It forces us to be more critical thinkers. It forces us to hone those skills, to use them to the best of our ability, to not necessarily believe what we see, to think carefully about why we're seeing that. Where did that come from? Who might have sent that, and for what reason? You know, and what is the evidence that supports it? What's the context it's being presented in? What are the cultural cues that might suggest it's not true? It forces us to do that, and if we don't, and unfortunately there's no evidence that people do much of that anymore, then, you know, I'm not optimistic for the future.

Well, yeah, I want to get to that, but let me just throw the name Fox News on the table here. Fox News is able to convince people of things that are untrue by the millions. You know, tens of millions of people seriously believe what Fox News tells them, even though a critical thinker would say, wait a minute, that doesn't make sense at all. And yet they buy into it. So I think, you know, it's a demographic issue. If I give you a country of 330 million people and I say, well, you know, maybe 40% of them are capable of critical thinking, and they will be very careful and paranoid about any kind of input from any media, I say, good, you know, good for them. Their training, their education was good, and their, what do you want to call it, their ability to do critical thinking is good. But if the rest are unable to do that for any reason, or don't want to do it, they'll fall into this vulnerability, this bubble thing. Gee whiz, we're sunk. And the notion of democracy really is based on critical thinking.
The individualism of, you know, the early, exceptional US, where people would actually talk to each other personally, argue with each other, take issue, and come to some kind of, I don't want to say consensus, but at least, you know, some kind of critical thinking experience. I don't think that happens anymore. You get in your bubble, you stay there.

Yeah, I hate to agree with you, but I am. To me, it seems, and, you know, I spent most of my career in science education, and I don't understand how we've so utterly failed to teach evidence-based thinking. So many people, so many of our leaders in Congress, in the Senate, are blatantly ignoring evidence and choosing to promote a viewpoint that has nothing to do with reality, has nothing to do with the truth, has nothing to do with the evidence that is there. And just, you know, going down this path that they, for whatever reason, feel is appropriate. It's very, very, very sobering.

And it's a ripe pomegranate for AI. It's just an easy job for AI to create public opinion out of thin air.

Yeah, it feeds into all of our psychological biases, and we all have these biases. We all, you know, sort of want to have things that resonate with us, that reinforce our existing beliefs. We cannot help that, you know. You will think that arguments that coincide with your own beliefs and wishes are better arguments than those that don't. I mean, just, you know, these are our human biases; they are there. We're wired that way in our brains. And we need to be aware of that. We all need to, again, watch out for it in a very critical-thinking, sort of metacognitive way, right?

Yeah. Well, it's like the human condition has always had the devil and the angel advising each person. And that's what we have here, except that you're more vulnerable now, because there's more noise coming at you, and you can pick the wrong one.
So what's interesting, and I want to close with this and ask you a hard question. On the one hand, you have the devil, and on the other hand, you have the angel. On the one hand, you have somebody who's solving medical problems, doing remarkable science. Remarkable, unbelievable science, science you would not have imagined when you studied years ago and took your degrees. And, you know, on its side of the equation, this will undoubtedly make our lives better, in terms of the medicine, the communication, manufacturing, where an individual can have an Andrew Yang kind of life, where somebody takes care of him. And maybe it could even improve government, if it got into the right position. Okay, on the other side, there are people who would like to muck up public opinion, who would like to tell you lies and set it up so you believe the lies, who would like to use this as weapons, even weapons of mass destruction. And they're not the same person. You know, the fellow on the one side, he's a good, decent human being with morality. On the other side, not the same person. And he's interested in destruction. Okay, and so, but the one thing they have in common is AI. That's what they have in common, including governments, authoritarian government versus democratic government, all that. So my question to you is, in the fullness of time, and that may be coming quickly, because AI accelerates time, it accelerates the human experience into moments, seconds, nanoseconds: which one will prevail? Which one will govern, define our future, Ethan?

Well, let me just step in. I love that analogy. You've got your angel and you've got your devil, in a classic sort of way; we've all seen those images. In this era of information overload, it's as if you have actually several angels sitting on your shoulder, your own internal ones and some external ones too. But now you've got this massive crowd of devils, you know, your own demons, but tons and tons and tons of other ones.
So all this noise, and, you know, which are you gonna lean to, which are you gonna go with? Again, I'm an optimist. I ultimately believe we will work as human beings to do good. Our technology will ultimately help us, but I suspect we may be in some very rough patches before we get to that utopia.

Utopia versus dystopia, there you have it. Ethan Allen, our chief scientist, wrestling today with the morality, the norms, the ethics of science in general, but especially now, when it is coming at us so fast. Thank you, Ethan. I look forward to our next discussion.

Thank you, Jay. I agree totally. Aloha.