Two o'clock rock. I'm Jay Fidell. Welcome back to ThinkTech. This is our Likable Science series, featuring our Chief Scientist, Ethan Allen. He's with us here today to discuss the incredible and potentially disruptive phenomenon of artificial intelligence. High-tech indeed. Sure, we've had AI for some years now. We've talked about it for a long time, but now, with bigger, faster computers and huge database capacity, AI is coming of age. Really. We can win at Go against international champions. We can run trains and planes and cars. We can analyze things and make important human decisions we never thought we could make before with computers alone. Now it's different. We can talk with them in any language and not even know they're machines. We can have conversations with them and share our thoughts, even our private thoughts, and get their input. They can be our best advisors and maybe even our best friends. How? Perhaps, just perhaps, we can endow them with self-awareness. We can make them self-aware, just as humans are self-aware, whatever self-awareness may really be. We seem to be at a tipping point right now on AI, and this could be the next really big technology to change our lives, our world, and the human condition itself. Let's find out more from Ethan Allen about what's going on in AI these days and how it will take us to the next chapter of human history and development, right here on ThinkTech.

Welcome back to your show, Ethan. Good to be here, as always. Nice to see you. So what's going on? What is AI? How does it work? How does it differ from all the other computer functions that surround us?

Well, the idea of a thinking machine is apparently an ancient one. The ancient Greeks envisioned thinking machines, machines that could behave like people. Nobody did much about it until maybe the 1600s, when people began cranking out the first crude calculating machines, and it really took off in the 1950s. Some folks began getting machines to do interesting things beyond being just big calculators of numbers, and they started to realize the potential. For about a decade the field really took off, to the extent that the founders of AI were predicting that within their generation we'd have machines indistinguishable from people.

Like R.U.R., of course, Rossum's Universal Robots. That's right. That was a play in the '20s about robots taking over the world, robots that were just like people. It was a pretty popular play, and it was way ahead of its time. Right, and Frankenstein, again, an artificial life form, a creation with human-like qualities. So it's been talked about for a long time, but in a sense it's been like fusion energy: the real ideal keeps being elusive, keeps staying twenty-five years away.

But so many things have happened with AI, certainly in the last five years, that it really does seem, as you said, to be at a tipping point. Something is very different about the field. Part of it is that the folks in the know, and I'm not a computer scientist, let's be real clear about that, can now build machines that you don't have to program explicitly.
The machine can look at data, and more data, and more data, and essentially make up its own programs based on all the data it's finding and the patterns it's seeing. It figures things out. It makes sense of its world, basically, which is very different from what used to happen with machines. Machines were given very specific instructions, which they followed very well and very quickly. The instructions could be very complex, but that's all they did. Now they do other things. They actually think on their own, in that sense.

You referred to AlphaGo, the Go-playing computer that beat a world-class champion in March of this year. AlphaGo initially didn't know any Go strategies. All it knew was how to play the game, which is very simple, and what the object of the game is, which is to surround your opponent's pieces. Then it played literally millions and millions of games against itself and gradually figured out the strategies, so that ultimately it sat down against a world-class player and defeated him four games out of five.

Yeah, and it probably played those millions of games fairly quickly. You didn't have to wait two years for that. Right. And that suggests that what we have here is a confluence of advances. Computers are faster; everybody knows Moore's law, they get faster all the time, twice as fast every time you look. That makes them pretty fast, and even cheap computers are pretty fast. Then you have greater memory, so you can do more in memory, and memory is phenomenal even on a household computer. Then you have storage, which is much greater than it was even five years ago, again Moore's law. And finally, I guess we have languages now that are more sophisticated than they used to be. It's still all based on a few basic things that I can think of. One is the if statement, which expands into the case statement: if this, then that; case this, then that; case that, then this. And then there's the ability to create functions. I'm not a computer scientist either, but AI must be based on the notion that you can put in data, analyze the data, draw conclusions about the data, and then write those conclusions down into a database and remember them. It all seems very logical.

Well, yes, and with access to more and more data in these giant data sets, and access to the cloud, computers can not only bring data together and draw conclusions from it; they can use those conclusions to shape their next algorithms and make sophisticated guesses about how things should happen, or where to look next for more data that can reinforce their view or counteract it.

Yes, the cloud is a big part of it, because that expands everything. And with these lightning-fast functions, you can take all that data, analyze it, put it in a database, remember it, and then go back and get it really quickly.
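To make that loop concrete, here is a toy sketch in Python of a machine learning tic-tac-toe purely by playing against itself: it stores what it learns and retrieves it to pick better moves next time. This is only an illustration of the idea, not AlphaGo's actual method (which combined deep neural networks with Monte Carlo tree search), and every name and number below is invented for the example.

```python
# Toy self-play learner for tic-tac-toe: no strategy is programmed in.
# It plays itself, stores value estimates for (state, move) pairs,
# and retrieves them to choose better moves in later games.
import random
from collections import defaultdict

Q = defaultdict(float)        # (board, move) -> learned value estimate
ALPHA, EPSILON = 0.3, 0.1     # learning rate, exploration rate
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board):
    # Mostly exploit what has been learned; occasionally explore.
    if random.random() < EPSILON:
        return random.choice(legal_moves(board))
    return max(legal_moves(board), key=lambda m: Q[(board, m)])

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not legal_moves(board):
            # Nudge every move toward +1 if its side won, -1 if it lost.
            for state, m, side in history:
                reward = 0.0 if not win else (1.0 if side == win else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):   # "millions of games against itself," scaled down
    play_one_game()
```

After enough games, the table alone plays respectable tic-tac-toe with no strategy ever spelled out: the same principle, at toy scale, as learning Go from self-play.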
And the other thing I think is critical here is the interface with better sensor technology. There are more kinds of sensors available, more sophisticated sensors. They're now working on hands for computers that can grasp things. They've finally figured out how to make computer hands that can feel and manage a very light touch, so they can pick up very delicate objects, or they can pick up very sturdy, heavy objects and grip them much more firmly, and they can tell the difference. They won't crush an egg or a flower, but they can pick up a steel bar, too.

It strikes me that this is like distributed knowledge. Yes. In other words, it's not somebody in a castle in Transylvania designing this stuff. Right. It's being designed all over the world. Exactly. Of course there are engineers and laboratories hither and yon that are ahead of the pack. Right. But the fact is, the pack is not that far behind. Everybody's working on this stuff.

Yeah, every time you turn around you hear about new apps. They now have apps that work with your home lock system, so with your cell phone you can unlock your door at home to let a friend in when you're not there, and lock the door behind them. Right. And there's the ability to analyze large amounts of data, including sensor data; it's not just numbers and text, it's highly technical sensor data. I'm reminded of that program, Person of Interest, where they use cameras to look at a crowd and identify your face. That's happening, all over the world. Sure.

The people I had on my show a bit ago from the Smart Yields group here in Hawaii, they're setting up small farms now with huge arrays of sensors on the ground: soil sensors, salinity sensors, temperature sensors, pH sensors, fungal sensors, combined with sunlight meters and wind sensors. They combine this with satellite data, and they have computers analyzing all of it in real time and controlling how much irrigation and how much fertilizer to apply, where and when.

Yeah, and don't forget the micro-sensors in the biochemistry world. Oh, yeah. Even the atomic world, where they can examine things and do analysis. The barrier between information technology and biochemistry is blurred already; one reaches into the other and can evaluate it. I'm actually just starting to work on a proposal with some folks at the University of Washington for a synthetic biology program, where they're going to be training students to cross that boundary between the biochemistry and life sciences side and the chemical engineering, bioengineering side.

So what do I have to have? If I want to be a mad scientist and do some AI, what do I need? Do I need a room full of Watson computers, or can I get along with the cloud and a relatively modest home computer? I'm quite frankly not sure; I don't know how much of this is outsourceable now. What I think you do have to know is how to use some of the tools that are available now, things like Python, which can work with lots of databases at once to help you pull the data you want and put data from different databases together in meaningful ways. I have no idea how it works.
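For what it's worth, the kind of joining Ethan describes is usually done with a library that runs on top of Python, such as pandas. A minimal sketch follows, in the spirit of the Smart Yields example above; the sensor readings, field names, and the irrigation rule are all invented for illustration.

```python
# A minimal sketch of joining data from different sources with pandas
# (a common Python library for this). All readings here are invented.
import pandas as pd

soil = pd.DataFrame({
    "field": ["A", "A", "B"],
    "moisture_pct": [12.0, 14.5, 31.0],   # soil-moisture sensor readings
    "ph": [6.4, 6.6, 7.1],                # pH sensor readings
})
weather = pd.DataFrame({
    "field": ["A", "B"],
    "rain_forecast_mm": [0.0, 8.0],       # e.g., from a satellite feed
})

# Average each field's sensors, then join the two sources on a shared
# key -- the same idea as pulling from two databases at once.
fields = soil.groupby("field").mean().reset_index()
combined = fields.merge(weather, on="field")

# One simple rule a real-time controller might apply: irrigate dry
# fields only when no rain is expected.
combined["irrigate"] = (combined["moisture_pct"] < 20) & (combined["rain_forecast_mm"] < 1)
print(combined)
```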
But one thing that comes to mind is that, not that it's easy, there are a lot of open source programs out there. And by open source I mean free programs that are very powerful, more powerful than the traditional programs of ten years ago. If I'm a kid of sixteen and I'm inclined to work on this, I can get those programs onto my machine instantly. Right. And here's a parallel: if I want to be a bad guy and hack, I can download hacking tools, also for free, from the internet, and go do hacking. By the same token, I can probably, and I'm guessing here, download AI programs that make algorithmic improvements to my open source programs. Now I can do remarkable things. And if those things are not available today, Ethan, soon they will be. Right, right. All boats are rising. Very much.

And the people who can develop apps, because the platforms are already there, the people who can address problems more complex than computers have been able to solve before, they can make big money. They can solve problems we never assumed we could solve. Now let your mind fly, right?

Yeah. Can they deal with some of the big global issues? Well, that's the question. Just think: if we can drive trains and cars and boats, and be smart enough for that, why can't we do diplomatic relations? How complicated is that? If you put all the information in there, and you have this self-learning capacity, it's really mind-boggling that the thing can learn it by itself. And if we can learn diplomatic affairs and languages, and languages are easy for AI, then we can learn government. I'm getting scared now.

Part of the trouble will be getting people to accept that, to accept that your AI is a neutral party and is going to play fair, basically. Because the powers that be probably don't wish to give up their power and authority very easily. Well, we've seen that computers have taken over our lives and our society, and that's why hacking is getting to be more of a problem: it affects everything we do, really. But if we give AI even further license, even further power, and we say, you can control this part of government, or all of government, then Big Brother is watching you. Science fiction come true, and now a little box that big controls the country. That's okay as long as we securely program morality into it, the fundamental, unchangeable human-rights rules it can never break. Right, right. Like the Three Laws of Robotics. Yeah, but of course some bad guy will figure out how to get in and override that. Someone from Russia, for example, can get into that box and change the fundamental rules, or override them. Then all of humanity is at risk. Yes, exactly.

So let's think about that for one minute, get scared over it, and then we'll take a break and come right back. Okay.
So, are you scared yet, Ethan? You know, we can design a new world. We can design everything. Right now, given the last election, some say there are a lot of disaffected people out there. They don't like what happens in government, they think it's dysfunctional, and there's some reason for that. Government can't seem to handle the country in its size and complexity, and it's the same in other countries. Our old form of government, as established by the Constitution, is under threat now because people aren't so happy with it and don't think it's working. Well, I take AI and I make the whole thing totally rational and, hopefully, totally moral. I build in the Bill of Rights and all that, and I make it learn. I make it exercise judgment and awareness, and I make a better country. Why not?

Right. It's sort of like the gentleman I had on a while ago, Hans Krock, with his ocean thermal energy conversion, OTEC. If you look at that system by any rational measure, you would say we should all be using it. It makes such great sense. It's non-polluting. It's efficient. It's effective. It produces fresh water as a by-product. It gives you hydrogen for fuel if you want it. There's no reason in the world not to use it. And yet, really, it's producing a few megawatts of electricity around the world at this point, and not much more than that. It hasn't taken off yet. Yeah. So if I were a completely rational governing computer, you'd have it everywhere, a band of OTEC plants around the tropics, in tropical oceans. I would ask it the question, or it would ask itself the question: what do we do with OTEC? And it would say, oh, that's a good idea, we're going to do that. Right. It begins to suck the excess carbon dioxide out of the atmosphere, cools the ocean, stops it from acidifying. It has a lot of potential to do a lot of good things. You bring rationality.

Yeah, but again, how do you get the world to buy in and be rational? All the economists used to believe that people behaved rationally, until they saw the evidence: people don't behave rationally. That's true, they don't. My theory is the mammalian theory: everybody is a biochemical combination, and we are not driven by rationality but by other things.

Right. I think maybe it's one of those things where, like I said, every kid can download the tools to start doing AI. So now you have a lot of kids downloading a lot of tools and doing AI, and everybody gets the idea that AI is pretty good. Then you get government agencies saying, well, we can delegate part of our job here to AI; it can make decisions. Maybe you even get up to the departmental level and say, we don't need the department head anymore. We'll just put the question into the AI machine and it'll answer. In fact, we'll let the AI machine decide the question itself, and be just as smart as it can be. Then one day somebody says, you know, we've got more than a department head it could replace.
How about the chief executive? How about the legislature? How about the courts? People have long felt that the courts really are just interpreters of data. Why don't we just put in the rules, whatever they are, and have the machine decide? And then, of course, the time will come when that black box says, well, what good are all these people running around? Why should we bother with them? Why should we give them any resources? It's not efficient. Yeah, that's sort of a natural risk, isn't it? Right. And the stakes are really high then, because it could decide that we're all dispensable, all of us. One of them might accidentally pull the plug on us. And then we'd propagate a whole nation, a whole world, of black boxes instead. That's right. They'd remanufacture themselves. Right, exactly. And that, of course, is another coming-together of different technologies: the whole nanotechnology, self-assembly piece, where machines are more and more able to build sophisticated structures at the atomic and molecular level, and have those structures build themselves.

We could all be sitting on the couch with the bonbons, not having to work, while machines run everything. It reminds me of the basic income initiatives going on in Finland and in Sacramento, where you pay a given population a couple thousand dollars a month and say, you don't have to work, we're just going to take care of you. I think that's a very idealistic thing to do. And if we had computers doing all the work in the world and serving us hand and foot, then everybody could have this base level of income, and they could go shopping all day, and buy computers, I might add.

Well, we'd have to think about what our purpose in life is to be. People do need to be driven by purpose. And if we don't have to make decisions, if we don't have risks, if it's all just, well, out of Brave New World, maybe we lose some humanity in the process. That's also scary, because then we become vestigial, all of us. Yeah, we've evolved in an environment where threats have been this constant, never-changing thing around us, and if machines essentially fulfill all the levels of Maslow's hierarchy and there are no more threats, what are our brains going to do with that?

Yeah. Well, I think technology is a great tool, but it's also a great risk, and it depends on how you treat it. Using hacking technology to hack the elections of other countries, for example, that's really bad stuff, and conceivably AI could aid in doing just that, in social manipulation, political manipulation. AI could be a great assist for that. AI could be used as a weapon of war, too, and destroy huge populations, where you could create a whole war machine, a campaign for war: we'll turn off the grid here, we'll manipulate the data there, we'll confuse the population there, and you won't have to fire a shot, but it'll be the end of a people. Like the movie Wag the Dog, right? Yeah, but on a higher level, where you create a whole artificial war. So for every kid who's developing AI, we need somebody else doing the same: my AI is better than yours, and besides, it doesn't do the nasties in the end. It's nice to us.
Maybe the AI that's nasty wins in the end. So you have to have some moral authority that governs all of this, and query whether our governmental systems, in this country or outside it, are equipped to deal with the dark side of technology, the dark side of AI, coming out. Yeah, and we're seeing that everywhere. We're seeing it in biotechnology as well; all these different areas are asking us to decide things we didn't even have to think about a few years ago.

Yeah, and now we have Mr. Putin saying he's going to scale up Russia's nuclear capacity, and Mr. Trump saying he's going to do the same thing here, and, presto digito, we're back in the Cold War. Which reveals the human condition: people not getting along and using technology as a weapon to destroy other people. And nations are walking away from the World Court now, which is really disturbing to me, because if they'll do that, what are your odds of getting them to agree to let this rational black box decide all the rights and wrongs?

Yeah, well, it goes back to a point you made a little while ago: how do you get people to agree to replace existing decision-making structures with AI? How are you going to get them to agree, when they have to concede their own authority that way? And people will happily do that in some cases. There hasn't been a lot of pushback against Roomba, right, the little robot that goes around and vacuums. Because nobody wants to vacuum. No problem. Everyone will give up their right to vacuum. When there's an incentive, when it gives you more free time, more leisure time, saves you expense or energy, people will concede.

But how about laying the route of a pipeline that's carrying oil? Right, that's controversial. How about placing telescopes on mountains, and exactly where do you place them? There are all these non-technical considerations, cultural and psychological ones. Right, but again, this is just what AI is pressing into now: being able to work in these messy situations where it's not absolute, it's not yeses and noes, it's not ones and zeros. There are a lot of different kinds of factors; you have to look at a lot of different things.

Yeah. And assuming it's presented correctly to the public, who might otherwise argue about it, who will otherwise argue about it: if it's presented correctly and it has credibility, then that's what we ought to focus on. In other words, don't worry about it ruining the world. Let's just ask it relatively simple questions that might otherwise be controversial, feed it all the data we can possibly feed it, and let it turn around and come up with recommendations that are well thought out and conceivably credible and believable.

Well, again, it's somewhat like the people who say no genetically modified organism should be allowed. I'm sorry, but that technology is out of the box. It's a Pandora's box that's been let loose. It's going to go. So we might as well regulate it as best we can, make use of it as best we can, and proceed along with it. And the same with AI. It's here to stay.
We can't push it back into the box anymore and say we're not going to have smart machines. Smart machines are here. If we don't build them, somebody else will; everyone knows how to make them now. So all we can do is try to use them as sensibly and rationally as we can, so they do as much good for as many people, and as little harm to as few people, as possible. And just hope the technology isn't converted to weapons of war, though it might be; it might even be converting now.

I'd rather go to an AI box and say: what should the rules about GMOs be? What legislation do you recommend about GMOs? How are we going to solve the problem of pesticides? You tell it everything you know, which is not that much; we don't know that much. You give it the biochemistry, the social side, all the protests. In fact, you take testimony: people come and talk in front of the box, and they can do it all around the world at once, a lot of data feeding in really fast. It examines the whole problem, everything. And it's probably going to come back and say: I need more data, give me more information. But maybe you build in something that says, you have to answer within a certain amount of time, and you can't break these rules or those rules, and you have to satisfy these considerations. I can see this going up the chain of authority and maybe coming up with recommendations that would really be valuable to us.

Yeah, I agree. I think AI has the potential to be a very useful adjunct to decision-making. The question is, at least for some while, it's still ultimately going to be a human decision. You can get the recommendations; your AI looks and says, hey, I've examined all the data around the world, and here's what I think we ought to do about global warming or climate change or whatever. And maybe it'll say: run away as fast as we can. Yeah, right. Or stick your heads in the sand. Right, whatever it says. Over time, we may develop an experience factor where these things are actually making good recommendations, and then we ultimately consider the possibility of putting them in charge. There's a risk in doing that, but it may be in the offing if they work well in the first analysis.

Right. So, for instance, if self-driving cars become relatively common, people will get to trust them. There are more of them, people see more of them on the road, the accident rates go down, more people start using them, the accident rates continue to drop, more legislation gets written to encourage self-driving cars, and so on. And then we will have conceded our authority over driving, to some extent, and we'll be more comfortable with that level of artificial intelligence having that power.

Yeah, it's like a think-tank thing. A black-box think tank: you put the question in, give it all the data it asks for, have it come up with an answer, and I bet you'll find some good decisions. And to some extent, because it's not centered in one place, and we talk about the black box, but it's really a million black boxes chatting with one another, it gains some resilience against hacking that way, right? You've got these autonomous units, just like every self-driving car is sort of watching out for itself. And you teach it to resist bad influence; you teach it to resist hacking, ideally.
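One way to picture the "rules you can't break" idea Jay raised above is a system that filters candidate recommendations through hard constraints first and only then ranks them on the messy, weighted factors. The sketch below is purely illustrative; every candidate, constraint, and weight is invented, and a real policy model would be vastly more involved.

```python
# Toy sketch: hard rules are absolute filters; soft factors are weighed.
# All candidates, numbers, and weights are invented for illustration.
candidates = [
    {"name": "ban pesticide X outright",    "cost": 9, "health_benefit": 8, "violates_rights": False},
    {"name": "phase out over five years",   "cost": 5, "health_benefit": 7, "violates_rights": False},
    {"name": "mandatory farm surveillance", "cost": 6, "health_benefit": 6, "violates_rights": True},
    {"name": "do nothing",                  "cost": 0, "health_benefit": 0, "violates_rights": False},
]

# Hard constraint: options that break the unchangeable rules are out,
# no matter how well they score.
allowed = [c for c in candidates if not c["violates_rights"]]

# Soft factors: weighted and summed, because these are judgment calls,
# not ones and zeros.
WEIGHTS = {"health_benefit": 1.0, "cost": -0.6}

def score(option):
    return sum(w * option[k] for k, w in WEIGHTS.items())

for option in sorted(allowed, key=score, reverse=True):
    print(f"{option['name']:28s} score={score(option):5.1f}")
```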
So we're out of time, Ethan, but let me ask you to turn to camera one and tell the people, including those kids out there who might be interested. This is Ethan Allen, our Chief Scientist, giving you his summary and his advice on how to think about AI. Go for it, Ethan.

It's sort of like thinking about science. You want to be skeptical. You want to be curious. You want to be open-minded, and you want to make evidence-based decisions.

There we go. There it is. I think we really covered some ground. All right. Science is great. It's needed. Likable, anyway. Thank you, Ethan. Thank you, Jay. Aloha.