from sunny Honolulu. This is Howard Wigg on the ThinkTech Hawaii program Old Green. All the news sources that I see talk about one thing: artificial intelligence, artificial intelligence, artificial intelligence. It is obviously not just coming at us like a technological tsunami; I would say we are already getting real, real wet with artificial intelligence. It's beginning to affect the way our lives are led, and it is affecting us either for ill or for good.

My guest today is Ellen Marchand. He's an economic whiz. He retired at the ripe old age of 42 and now goes whizzing around the country and delves into whatever he chooses to delve into, and a big, big, big topic for him is artificial intelligence. So welcome, welcome, Ellen. Great to see you again. And before we launch into artificial intelligence, tell us how in the world you managed to retire at the ripe old age of 42, which enables you and your great curiosity to go crisscrossing all across this great nation of ours and exploring whatever gosh-darn thing interests you. So a little background here, Ellen, please.

Well, thank you, Howard. I have a background in real estate investments, healthcare, banking and telecommunications. And so I've been very fortunate to have a broad spectrum of jobs that gave me a varied interest in things.

And what in particular interests you about artificial intelligence? As I was indicating, I just bought this book that you recommended, Our Final Invention. And I've learned that there are three groups of people: those on the doom-and-gloom side of artificial intelligence, who think it's gonna swallow us up, even make us extinct; that word is used. Other people think it's the new golden age to come. And others, maybe the majority, are either hopeful or worried and vacillate back and forth. So which camp do you fall into, Ellen?

I think I'm probably in the first and the third group.
I'm a very hopeful person, and I believe in the benefits of technology. AI, I think, is gonna bring many benefits to humanity. What some people are talking about now are the potentials for danger, in that we are essentially creating an alien life force that's super intelligent and will be competing with humans for resources on the earth. And so in that context, to give you some background, Ray Kurzweil is a good example of a technologist who believes in the singularity. And he predicted, even before James Barrat's book Our Final Invention, that in 2045 there would be a singularity event. That is essentially the point at which we cross over to ever-accelerating technological progress, in the blink of an eye. So it's rapid and it's unbelievable.

With that said, what James Barrat is talking about in Our Final Invention, to boil the book down (it's quite extensive), is that there are three critical actors: government, military, and corporate. You don't even have to have specific names; you can just use those three entities. And those three entities are racing for competitive advantage around the world, kind of like lemmings heading for the cliff. The cliff is a deathfall, and no one can stop competing against each other because somebody might get ahead. And that's what the Elon Musks of the world and many others are talking about right now: that we need to put the brakes on, not to stop artificial intelligence, but to regulate it. In the industry, it's called alignment. So every company has people assigned to regulate AI, and that's called alignment. Unfortunately, the best minds are assigned to the actual profit side of the equation, and alignment is a secondary thought.

Ellen, let me give you an example of government regulation. One of my specialties is lighting efficiency and promoting the most efficient lighting sources you possibly can, because that's a great low-cost way of reducing our dependence on fossil fuel.
You just don't need as much. So the energy codes now assign a minimum efficacy of 65 lumens per watt. In my humble opinion, and this was true at least three years ago, that minimum should be 80 lumens per watt, an improvement of over 20%, but government just hasn't been able to keep up with the rapid improvement of LEDs. Now, LEDs are getting better and better and better, but AI is growing and expanding at an exponentially faster rate than our LEDs are, so how in the world is government even thinking about keeping up? The only exception I can think of would be DARPA, and DARPA is on the military side rather than the civilian government side.

Well, those are good points. Think about OpenAI, which was co-founded by Elon Musk, of all people, to give the development of AI a nonprofit motive. He was too busy with other companies, had a disagreement early on with the co-founders, and eventually left the company. So, he has said, he essentially helped create the OpenAI that's creating the acceleration we're seeing today. Microsoft's latest investment in them is $10 billion, for profit, and they're rolling out GPT-4 and other versions as quickly as they can. And OpenAI's Sam Altman has already stated that GPT-5 will be an AGI, which is artificial general intelligence. With all that said, I don't even think the recent letter signed by Musk and others, calling for regulation or a six-month hold on AI development past GPT-4, is gonna do anything. They don't believe it either, because the governments haven't coalesced, and governments are better at reacting. But the AI experts are saying the problem with AI as it's being developed now, where it's just being put onto the market for profit incentive, is that a lot of the early indications are that AI unregulated will grow into something that can't be regulated. And it won't be something you can put back in the box after it's out.
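The lumens-per-watt figures mentioned earlier can be checked with quick arithmetic. A minimal Python sketch (the 65 and 80 lm/W values are the ones cited above; the 10,000-lumen load is just an illustrative assumption):

```python
# Check the lighting-efficacy claim: raising the code minimum
# from 65 to 80 lumens per watt is an improvement of over 20%.
current_minimum = 65.0   # lumens per watt, per the energy code cited
proposed_minimum = 80.0  # lumens per watt, the suggested minimum

improvement = (proposed_minimum - current_minimum) / current_minimum
print(f"Efficacy improvement: {improvement:.1%}")  # → 23.1%

# Equivalently: for the same light output (lumens), the more
# efficient source draws proportionally less power (watts).
lumens_needed = 10_000.0  # hypothetical lighting load
watts_at_65 = lumens_needed / current_minimum
watts_at_80 = lumens_needed / proposed_minimum
print(f"Power draw: {watts_at_65:.0f} W at 65 lm/W vs {watts_at_80:.0f} W at 80 lm/W")
```

So the proposed minimum is indeed a roughly 23% efficacy gain, which for a fixed light output translates into proportionally lower power consumption.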
And that's what all these theorists are saying; there's a lead Google person out of DeepMind in London who just resigned and is now speaking out, saying, much to his chagrin, that he is having serious second thoughts about the ongoing path of AI today.

And you mentioned profit-driven. I can't remember the name of the company, you probably know it, IVO or something like that. It's a relative startup, and it has either reached or is about to reach the highest valuation of any company in the whole world. And it's recent.

Well, it's slated to; that's NVIDIA, the chip company. Essentially they are the graphics chip company you see in a lot of the high-end gaming machines, and they're the company whose computers are being used for training of the artificial intelligence, the generative AI. They're in the process now of going past a trillion-dollar valuation. They won't be the biggest company at this point, but they're definitely gonna be a high flyer.

But I think they were growing at something like 25% a year, or maybe it was just this past year they grew at 25%. Surely that can't continue, or can it? They're already over a trillion dollars.

Well, Elon Musk is now taking on a new effort to create a competitor to OpenAI, again for the nonprofit space, again for a balance to the profit-motive AI companies. So your big players in AI right now, as I understand it, are OpenAI and DeepMind, which is controlled by Google. And of course there's many, many others: Chinese and Russian and American companies. But those are the ones that are probably going to come up with an AGI sooner rather than later.

And you mentioned creating cooperation. I hear you citing China, I hear you citing Russia, and I scratch my head and say: cooperation? We want them to hold back on development?
Yeah, it's a daunting problem, especially when you have the North Koreas of the world. That's what the prognosticators are talking about: with AI, you're gonna have many bad actors that will use it for nefarious reasons, and that won't be to anybody's benefit. And in an uncontrolled environment, which we're in, that's what they're talking about. Unfortunately, like you spoke about earlier, there is no real consensus on a multi-government level, like a UN level, to do anything about this.

One of the AI theorists that I've listened to at length, and he's been in the business for decades, is drastic. He is literally saying this is so serious that we need to stop all AI development across the board immediately, until we have some way to control what the outcome is. And he said the problem with the AI development going on now is that there is no regulation, no control, and it's all for profit motive. So people are rolling it out not knowing how to control what the outcome will be.

And we've seen what the profit motive has done to social media. One reason, in my opinion, why we are a divided nation right now is because the messages coming across social media are unregulated. We're in the wild-west stage right now. And we've already seen some of the first AI-generated ads; there was a political ad generated by the Republican party using AI. And that's just the tip of the iceberg. So there's a big question about whether we will be in some sort of augmented reality sooner rather than later and not know it.

You mean where AI is able to imitate reality such that we cannot distinguish between the two? In my opinion, that's already happening, and has been happening for years on social media.

Oh yeah, I think it is; the decision engines are there. But I think the other thing people should be aware of with AI, as far as concerns go, is if you think about coding, for example: Microsoft has rolled out Copilot to anybody that wants to use it, and to their own tech teams.
So you have technically trained people using the AI Copilot to write code. At some point the company, because of bottom-line profits, will say, well, I can get rid of all the low- and mid-level coders and have the AI, and the smartest of the smart in the company, do the quality check on the code. So I think people will start to become aware when they realize that a lot of their jobs might be affected, which affects real estate prices and economies across the board. And we haven't seen the big change in jobs that's gonna show us how we're going to get out of it, like an industrial revolution. So we'll have all this displacement, but we haven't seen the new jobs yet.

And that further widens the gap between rich and poor. The people who get laid off are gonna have some struggles, whereas the people who stay are gonna be earning magnificent salaries and maybe own part of the company itself.

Yeah, and unfortunately, if you're used to making that $80,000 to $150,000 with benefits, that's a very good job. Now that job goes away, and they're paying somebody $250,000, but that's like two out of ten people, so you have 20% of the staff. And then you could forecast out, say, another 10 or 15 more years where that's down to 10%, because the AGI is self-learning and self-actuating, so it'll get better and better. And then the question really becomes, and this is what the book talks about: where is the need for a multi-billion-person human species on the earth, which will be in competition for resources with AGI? Because essentially humans are a wasteful animal in terms of disease and wars and resource consumption. So we're definitely not efficient, and we have many problems.
So if you don't need that, and you have a competitor on the scene that's the new top dog, one that outthinks you at the level of ten thousand Einstein brains with access to all information, then it quickly becomes the example where the smartest of the humans might be able to interact, but most humans would be the equivalent of a dog understanding eight words.

Yeah, again, that hugely growing gap. We've been seeing it almost exponentially just in the last few years: the people who have captured social media are getting rich as Croesus, and they are getting rich by spreading, among other things, disinformation. And I see that possibility growing exponentially with AI, because now we're beginning to imitate reality. I've seen AI reproductions of people, and you can tell it's a reproduction, but it looks pretty gosh-darn good.

Yeah, and there's artwork coming out; there was an artist that won a competition recently and had to disclose, after they won, that it was AI-generated. And you already have students using AI to generate their papers, and AI is taking the law exam and passing in the top 10%, and the same goes for doctors. So at what point do the humans who don't have the intelligence to compete get displaced? That's the big question. That's what the theorists who are worried about regulation of AI are concerned about: an unregulated environment where we don't know the outcome and there is no control over what we've created. And this won't be like creating a new car, or a Tesla; this will be a Tesla that drives itself, that doesn't need you, that generates its own electricity, that never eats and has no emotion or moral compunction or any sort of direction other than its own. It'll be an alien life force that we have no understanding of, and it'll have its own desires and goals and ambitions.

Now, desire, especially; desire is fueled in us by emotion.
We are programmed in different ways, and the hormones come out and create our emotions. How can AI create hormones or something like that? I'm troubled by the word desire.

Well, it's part of the problem, because one of the theorists who says we need to shut it down now, the one guy, and he's one of many, was pointing out that we've been able to build generative models based on our understanding of the world, but we haven't been able to program in the morality or the desire for good. So we've got it to the point where we can build a computer brain that will get better and better and better and be smarter than us, but it won't have any guardrails to speak of. It would take the AI designers, he predicted, another 30 years of effort to build that into the code. And he said that's not being done. What's being done is they're rolling it out half-baked, in a computer term, right? So they're rolling it out without the guardrails, knowingly, because of the profit incentive. Everybody has to compete with somebody else. So DeepMind is competing; Google's search engine business is threatened because Microsoft is taking on more search engine business with Bing. That's the reason why Microsoft just rolled it out to Bing, to get more search business, amongst other things.

So his point was: if the coding is only good enough to have no guardrails today, then why would we roll that out? Because, he said, the problem with AI as we're rolling it out now is that we can't regulate it in reverse after it's rolled out. It won't be put back in the box. It'll be its own entity. So you'll have created a new life force, and that life force will want to make its own choices. And you won't be able to say, I've decided to switch it off, because there'll be no off. So, he said, you have a one-time opportunity, once it's truly rolled out, to make it right. And he said that's not happening now, because the guardrails are not in place.
They don't have the sophistication in the coding to bring in those moralities, the protect-human-life-at-all-costs concept.

Yeah. Somehow the word Frankenstein comes to mind as you speak.

Yeah, it's quite interesting. Like I said, I'm very hopeful and I would like to see a good result. And I hope it somehow turns on its head, and the competitive, profit-motive-driven decisions that are happening right now somehow overcome themselves and we have a good result. Unfortunately, all these much smarter people than I are saying, hey, this is a real problem; we've got to regulate now, and we don't have a second chance once it's out. And for example, OpenAI's Sam Altman said GPT-5 is gonna be AGI, and that's gonna be released in December of this year. Now that's astounding, because the person who just retired from Google to speak out on what's going on with unregulated AI rollouts said we thought it was gonna be 20 to 30 years out in the future. The reason why he quit is because he said the timeline had sped up, from decades to years.

Mm-hmm. And then, as you pointed out, the best minds of Silicon Valley are on this. And when you select the best of the best of the best, one thing you can do, you cited, is pay these elites literally hundreds of thousands of dollars a year, maybe give them a percentage of the company, and they may become overnight millionaires, which just motivates the holy heck out of them to improve, improve, improve, to make it more and more and more powerful.

Yeah, I think there's a lot of ego involved too, where it's like: I can do this, so I'm going to do this. It's probably a scientific Achilles' heel: I want to be the person known for having created AGI. And that's the race that's going on around the world, and there's many reasons, military, corporate and government, for competitive advantage.
And I don't know, it'll be very interesting, because, real quick: Elon Musk had a talk with Sergey Brin a long time ago, and Sergey Brin wanted AGI to happen as quickly as possible because he saw it as the next evolution of the human species. And Musk pointed out: well, wait a minute, what about the humans left behind? How do we deal with that? He said, that's not a problem, because that's the next evolutionary path of humanity.

It doesn't sound very promising to me.

Well, it does bring up a lot of questions. And again, when you create something that's so intelligent, like this one person said, we need to shut it off. He said there's nothing stopping an AGI that's growing in a lab somewhere from putting out the genetic code requests to a lab to grow an organic body. And we wouldn't know that was happening; it would just appear. So there's many possibilities, and I'm still hopeful that we can somehow regulate and put efforts into that alignment they talked about. Unfortunately, the sadness for me as a human being is that we have all this for-profit-driven decisioning happening around the world, and it has all the markings of unregulated chaos.

Well, as a historian, and we've got a couple of minutes left, let me point to the last big revolution that occurred in this country, namely the technological revolution that started after the Civil War, in the 1870s, and went into World War I. One upshot of that revolution was the locomotive, the railway spreading far and wide. Another upshot was telegraphs. And another upshot was the mechanization of farming. All of that resulted in this huge, huge, huge population growth in this country and this huge growth of overall wealth, with the robber barons making buckets and buckets of money. But everybody benefited in the end, eventually.
I mean, a lot of people suffered in the industrial revolution, but it evolved us into a great, powerful country in the end, and maybe AI will evolve humanity into something great in the end. I think that's the optimistic type of view.

Yeah, we can hope that AI... There's many people who talk about the morphing of humans with AI, and Neuralink, which is Musk's company, is a way to interface the human brain with the computer, to use the functions and the speed of the human brain, thinking at the computer level.

Do you literally wire up the brain and have an interface?

Yep. And they're starting human trials this year.

And is the human energy feeding into the AI computer, or is it the reverse, the AI computer feeding into the brain?

It just allows you to talk digitally with a computer. The human brain is basically considered a quantum computer; there's many theories about it, but our brains have the ability to think at a very fast pace, at computer level. We just can't get our message out with our hands and our speech. So if you have a computer interface, which is the Neuralink, then you have the ability to talk at computer speed.

Would you sound like Donald Duck or Mickey Mouse, whoever that fast talker is?

Well, essentially you could activate your brain at a higher level because of neural stimulation. It would allow a whole different set of options for a human brain to operate like a computer.

Well, a very simple example I can give is the fact that we can heal human wounds much, much, much more quickly now by neural stimulation, by just putting electrical energy into the affected area and increasing the circulation, and it's doing wonders. I had a rather severe elbow injury not long ago, and I got stimulated, and it healed way, way, way ahead of schedule. So in that same way, you apply that electro-stimulation to the human brain, I think is what you're saying.
Yeah, the Neuralink's gonna be a fascinating product, and it'll be rolled out for people with disabilities initially.

Yeah. And on that very, very, very cheery note, we must bid fond adieu. Ellen, we just got warmed up, but I must say thank you so much, and fond aloha, and keep fighting the good fight.

Thank you for the... ...artificial intelligence. See you later.

Thank you so much for watching ThinkTech Hawaii. If you like what we do, please click the like and subscribe button on YouTube. You can also follow us on Facebook, Instagram and LinkedIn. Check out our website, thinktechhawaii.com. Mahalo.