Yeah, well, we're back. I told you we'd be back, and we are back. And Matthew James Bailey is back. He joins us from where? Colorado, is it? Yeah, that's right. Good evening, Jay. Great to be here. Great to have you on, Matthew. Matthew is an AI expert, and he's qualified in many, many things that spring out of AI — he's written books about it. Inventing World 3.0 was the most recent one, which I have, which is a great book, to help you understand that there are very few limits to AI. Unfortunately, we don't use it enough in the, what do you want to call it, the social management, government context. And if you don't mind, Matthew, I'd like to talk to you about that today. Because you're always talking about the ethical guardrails on AI, and of course it needs ethical guardrails. But you wanted to take an analysis of Hawaii first. So let's start there. What were your thoughts about how AI could be involved in issues in Hawaii now? Well, first of all, does Hawaii want a self-determined future with artificial intelligence? Does it want to be driven by the agenda of the mainland? Or does it want to self-determine its own future and honor its cultures and take them into the future? And to create a prosperous Hawaii, where artificial intelligence is working well for the different communities within Hawaii, and also helping the businesses, education, and other sectors like the military, et cetera, to move forward into a new tomorrow for Hawaii? So the question is, does Hawaii want to take control of its future? Or does it want to be beholden to the mainland? Yes. And that's a really good question. It's not clear that anyone, aside from you and me here today, will ask that question. But there it is, though. There it is. It's determining policy by AI. It's not mathematics. It's not figuring out some numerical data. It's determining much larger issues.
And so when you ask about what Hawaii wants to do in the world today, whether it wants this identity or that identity, AI is perfectly capable of doing that. And so you say that, and then you, by implication, you're also saying, what about the country? Yes. The country needs to determine where it wants to go. And AI could help it do that. But again, the same problem: aside from you and me asking that question, who else? Yes, so quite a few things have happened since we last spoke, Jay. The National Security Commission on Artificial Intelligence — I was invited to a private roundtable with them and ambassadors from countries around the world. And they put together an incredible proposal, $32 billion, Jay, of how to mobilize America to become a global leader in AI and to protect the future of democracy abroad, basically against the Chinese threat, which is outdated in the media, quite frankly. And what's also happened, Jay, is that the state of Ohio has effectively digitally seceded from the federal government with new data ownership laws for the citizens of Ohio. So a lot's going on, Jay. You know, it's funny that you say that, because tomorrow we're going to have a show about Pegasus, you know, the NSO, the Israeli software that somehow got into the hands of the wrong people — it was supposed to go to responsible licensees, and then it got out. And now it's being used for surveillance. And it strikes me, and I would really like your view of this, that it's not so much about accumulating data anymore — that's like the last generation. It's processing the data. It's making sense of the data. It's learning larger lessons from the data, millions and billions of pieces of data. And the secret is not so much gathering it. We know how to do that. Pegasus can do that for every man, woman, and child in the world. It's what do you do with it then? You agree? Yeah, absolutely. Guess how much data is generated globally?
2.5 quintillion bytes of data is generated per day across the world. Jay, I can't even count those zeros. It's astronomical. So you're right, collecting data is something that we're really good at. And to your point, how do we take that data and put it into ethical AI models, in order to make our systems more efficient, to be accountable, and to start to do well in society — to help us leap beyond the challenges of today, of inefficient human-centric systems (and government tries its best, but it's pretty inefficient), into more of a streamlined society where we're actually moving ahead into a fantastic life and a flourishing experience with the environment, right? AI itself, Jay — maybe in about a year's time we'll see cognitive AI, AI that has some kind of reasoning like the mind. AI might help with government management and government decisions. It might even choose who's in the government. Now that would be interesting, wouldn't it? Yeah, and it could make policy decisions. But let me go further about my day, Matthew. A couple of hours ago, we had a show about Facebook and why Facebook has failed to identify the disinformation that is being parlayed on its platform. And a viewer sent in a question: how can you tell, how can anyone tell, whether a given piece of data, a report or some posting on Facebook, is true or not true? I said, easy. Talk to Matthew James Bailey. He'll tell you how to do that. You use AI to determine a threshold of whether that is true or likely not to be true. Now you may have to have a human committee to look it over on the second step, but on the first step, it's really easy to identify suspect information. And we could solve the problem with Facebook and all the social media by using AI to identify false statements. And we'd be a lot happier for it. Do you think it's possible? Would it work? Would it be helpful? Yes, it would.
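The two-step idea sketched above — an AI model scores each post against a threshold first, and a human committee only reviews what gets flagged — can be illustrated in a few lines. This is a toy sketch, not a real disinformation model: the phrase list, the scoring rule, and the 0.7 threshold are all invented for illustration.

```python
# Toy sketch of two-step triage: AI scores posts, humans review only
# what crosses the threshold. The scorer below is a stand-in for a
# trained classifier; the phrases and cutoff are purely illustrative.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # hypothetical cutoff; a real system would tune this

@dataclass
class Post:
    post_id: int
    text: str

def falsehood_score(post: Post) -> float:
    """Stand-in for a trained model returning P(post is false).
    Here we just count a few known-suspect phrases."""
    suspect_phrases = ["miracle cure", "they don't want you to know"]
    hits = sum(p in post.text.lower() for p in suspect_phrases)
    return min(1.0, 0.4 * hits)

def triage(posts):
    """Partition posts into (auto_ok, needs_human_review)."""
    ok, review = [], []
    for post in posts:
        (review if falsehood_score(post) >= REVIEW_THRESHOLD else ok).append(post)
    return ok, review

posts = [
    Post(1, "Local farmers market opens Saturday."),
    Post(2, "Miracle cure they don't want you to know about!"),
]
auto_ok, flagged = triage(posts)
```

Only the flagged posts would go to the human committee; everything else passes through untouched, which is the point of the first, cheap step.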
It would take a little bit of time, but certainly the mathematical models can be developed, Jay, to train AI using the data to look for false signals. When Mark Zuckerberg was questioned about this, he talked about signals. And so, yes, we can use AI to detect these signals. Now the question is, Jay, how do we deal with speech? How do we deal with the First Amendment, where people genuinely have a different view, maybe, to the mainstream view that's been adopted by most people — i.e., get a vaccine, right? How do we deal with that First Amendment? Because what I'm seeing out of the US government is government working with Facebook to control the narrative on social media. And that is a First Amendment issue, isn't it? Yes, it is. But you know, one solution on social media and on media in general is that we have to remember the First Amendment separates the media from the government. In other words, it's the government's obligation to allow a free press. It's not your obligation or my obligation, which creates a bit of a problem in the sense that we may not have the same view of truth. However, there is a way to fix that hole in the boat. And this has been considered by Congress, but then, as you may know, Congress doesn't do anything. So they didn't do anything about this. This is the provision that would allow an individual to sue the platform over erroneous information that was posted there by somebody else. A private lawsuit, a private lawsuit. And it makes the platform responsible. Right now under the FCC rules, I think, and other law, the platform can say, not my fault. I've merely repeated what they posted. It's not my fault. If that changed, then the policeman at the elbow would be the consumer, the viewer, who says that's false. And I might add, by the way, that everybody's asking, why is the right-wing media now saying it's okay to take the vaccine? Why are they doing that? They changed their position. Why are they doing that?
And I think the reason is — there are four or five reasons that come to mind, but one of the reasons is they are afraid that somebody who listens to them about not taking the vaccine will die. And that person's lawyer will say, well, you know, you listened to Fox News and you died. And they knew that what they were telling you was not true. So you have a cause of action, just like that voting machine case. You know, you disparaged the voting machine company, you lied, and there's a regular civil action that gets you for lying and doing damage to somebody. Well, it's the same thing here. I think they realized, or somebody pointed out to them, or maybe there's already a suit by the family of someone who died after taking advice from them on COVID. And that's why they're changing their tune. But in any event, you know, it seems to me that the answer is not government action, except insofar as the government's saying: if you, Matthew, or me, Jay, feel we've been lied to, and we were lied to to our detriment on that lie, then we can go after the platform — and that's a personal cause of action. It would change everything. Yeah, it would. And in the book, we talk about personalized AI, or the digital buddy. This week, I was speaking to the UK's leading wireless group — literally geniuses in the invention of wireless technologies — about personalized AI. And it was great. It was a really good PhD-level discussion, and they loved the book, which was great. When personalized AI comes in, then that will ensure a kind of truth barrier between what the digital world is saying and what our own truth is, ensuring that we get the right truth and detecting whether we're being misled by false facts, for example, or news, or we're being misled by a particular agenda from a particular big tech company, for example.
Personalized AI will assist us to actually recapture, if you will, critical thinking and the ability to actually choose without too much influence from this big world, this world that is rich in contrary views, should we say. Well, you know, it strikes me that if somebody is giving a speech, say on television, and, you know, is doing disinformation — I use the term disinformation because I think that's really what it is. It's not misinformation, it's lying. Disinformation. Sure. And so, there's a lie. And now AI, as you can tell me, AI works instantaneously. AI is watching, you know. So AI is watching what he says. And, you know, it's like in Hawaii, we have ratings for restaurants. So if you're considering going to a restaurant, there'll be a little green sign posted. You know, this is good or not so good — a gradient of good. Well, you don't have to take it off the platform. You don't have to shut down the platform. You don't have to exclude anybody. All you need to do is warn the public on a gradient basis. And it would say, Matthew James Bailey's AI has decided that, on a scale of A to D, this is a D. You shouldn't really believe this — or vice versa. And the bottom line is the person stands advised. What do you think? Yeah, and it'll go further. The personalized AI will say, hey, Jay, these are the nutrients that are really good for you today. This is the food that you like. This is how far you want to travel, because of maybe your diary or whatever. Let's look at the restaurants within those parameters of choice. Where can you get a table, because you enjoy these types of views — this type of AI can manage that for you. And there's no reason why we can't do this today. It's not that difficult. The keys to success are two things. One is proper data governance, so we can liberate more data ethically, train AI ethically, and then it can make a decision. So that's the first thing: data governance.
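The "restaurant grade" labeling described above — warn on a gradient rather than remove — amounts to mapping a model's falsehood score onto a letter grade and attaching it as a public advisory. A minimal sketch; the score bands are purely illustrative, not from any real rating system:

```python
# Map a 0..1 falsehood score onto an A (credible) .. D (suspect) grade
# and attach it to the post as an advisory label, instead of removing
# the content. The band boundaries here are invented for illustration.

def advisory_grade(falsehood_score: float) -> str:
    """Illustrative bands: A < 0.25 <= B < 0.5 <= C < 0.75 <= D."""
    if falsehood_score < 0.25:
        return "A"
    elif falsehood_score < 0.5:
        return "B"
    elif falsehood_score < 0.75:
        return "C"
    return "D"

def label(post_text: str, score: float) -> str:
    """Prefix the post with its advisory grade rather than hiding it."""
    return f"[{advisory_grade(score)}] {post_text}"
```

The post stays up either way; the reader just sees the grade and, as Jay puts it, stands advised.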
And secondly, we need to start thinking about the well-being of society and citizens as opposed to the bottom line from moment to moment. And this is the new mature mindset, Jay, I believe we need going forward, which is about a kind of maturing as a human species and starting to redefine what value really is and what the true currency of human living really is. Yeah. So let me introduce another thing. And that is — and there are many things to follow, but the next thing that comes up is — Michelle Goldberg wrote a piece for the New York Times a couple of days ago. It was an op-ed piece — they don't call it that anymore, they call it an essay or something — and she wanted to deal with the issue of why people are in Trump's base, why they follow him mindlessly. She is talking specifically about a group of people, the "always Trump" people, who will travel across the country to attend his rallies. They don't have a pot to pee in, but they travel across the country to attend his rallies. It's interesting — why do they do that? And all these other people, you know, believe him no matter what he says. And her answer — and it's not as simple as I'm portraying it — but her answer is: because they're lonesome, because their lives are empty. And he fills them up with something, and it's demagoguery, it's demagoguery and it's authoritarian government and it's a tyrant, but that's the mechanism, that's what works. And my question to you — and I never said this was a rose garden, this is a hard question — what can AI do when you have 70 million people who are being confused that way? And largely because their lives are insufficient, they're lonely and they have no friends. You know, in a nuclear society, we don't have as many friends as we did 10 or 20 years ago. We live alone, we watch the television, we sleep, we eat, we watch movies — it's not as rich as it was.
And so there's plenty of opportunity, plenty of vulnerability, for a guy like Trump to take over our thought process. What can AI do about that? Well, first of all, people outsource their sovereignty because it's easier to follow than to do critical thinking and make decisions for ourselves. And it goes on on both sides of the aisle, okay? And this is symptomatic of the human race in general: we outsource our sovereignty to others. And that's fine, but is that good for our well-being? The other thing, Jay, is that AI can help us to discover things we're really good at, that make us feel passionate and that we enjoy doing and being in life, okay? It can help us to find our gifts — maybe it's in the arts or gardening or community work or theater or whatever it may be, it doesn't really matter. AI can help us to find our gifts, at which point you're nourishing the individual emotionally, at which point they're no longer distracted by this empty space inside that is wanting to be fed by an outsourced kind of message, if you like. Does that make sense? Yes, yes, yes, exactly. And let me take it one step further, if you don't mind. So Joe Biden and Don Lemon today, right now in Cincinnati, they're talking about these adjustment things that the government has done, would consider doing, may do later and all that. And for example, there's an issue about whether, if you pay a high unemployment compensation — which the government is doing right now, in fits and starts — you are discouraging people from going back to work. So it's numerical, but it's also policy, and you want to tune things in our world, in our country, so that you give them the right benefits. You start the benefits, you stop the benefits, you change the benefits, so that when the economy is at the correct point, at the perfect time, they will go back to work. The problem now is that arguably some people are not going back to work.
So a lot of small businesses can't find their staff. They can't really do the job, even though there's demand. And so I'm thinking that AI could get in there on a governmental level and say, okay, well, should you give them $300 or $250 or $350? And can you change it? When do you change it? And then when you've done it, you have to get data and see if that's working. And if it's not working quite right, you adjust it again. Talking about 330 million people — you can get the data, you can figure out what the economy is from town to city to county to state, and then you can adjust this, so you can create a very adjustable economy. And furthermore, you can beat inflation if you make it smart enough. I suggest to you that the Matthew James Bailey AI machine could do that: we could develop policies and we could tune the policies so they work perfectly. Am I right? Is it possible? Yes, yes it is. And I mean, if you look at Wall Street and the stock exchange, they're already using very complicated machine learning and AI algorithms to manage the whole fiscal kind of movements in America and globally. So complicated models like that, in terms of determining how to operate in markets — which would be internal markets within the US — are certainly possible. It's a big task, but it's not rocket science, and all it takes is investment and the right mindset. This is what I think we need to do, Jay. I think we need to excite the American people. It's time to excite the American people. It's time for a mature leadership to take America into the future and to say: part of that future is gonna be with artificial intelligence. So let's look at our educational programs. Let's start teaching people critical thinking and getting them ready for the AI of the future and the AI jobs of the future. Let's start working with small businesses and turn them into AI-centric businesses so they can compete with Amazon and maybe do better.
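The adjustable-benefit loop Jay proposes — set a level, observe the data, nudge it, and observe again — is essentially a simple feedback controller run per region. A toy sketch; the target share, step size, dollar band, region names, and figures are all hypothetical, not real policy numbers:

```python
# Toy feedback loop for regional benefit levels: if too many job
# openings go unfilled, nudge the weekly supplement down; if the
# labor market has slack, nudge it up; always stay in a policy band.
# All constants and regional figures below are hypothetical.

TARGET_UNFILLED = 0.10   # hypothetical target share of unfilled openings
STEP = 25.0              # dollars adjusted per review round
FLOOR, CEIL = 150.0, 400.0  # hypothetical policy band

def adjust_benefit(current: float, unfilled_share: float) -> float:
    """One adjustment round for one region, clamped to the band."""
    if unfilled_share > TARGET_UNFILLED:
        current -= STEP
    elif unfilled_share < TARGET_UNFILLED:
        current += STEP
    return max(FLOOR, min(CEIL, current))

# One round across hypothetical regions: (current benefit, unfilled share)
regions = {"Springfield, MO": (300.0, 0.22), "Columbus, OH": (300.0, 0.06)}
updated = {name: adjust_benefit(b, u) for name, (b, u) in regions.items()}
```

Run repeatedly as fresh data comes in, this is the "adjust it again" cycle: each round moves every region a small step toward the target, which is exactly the tuning-from-town-to-state picture described above.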
Let's start looking at what's destroying the fabric of American society, or its diversity, and let's start using AI and a new mindset to start to move us into a new paradigm. And that kind of leadership, I think, Jay, will work. It does take time, it does take investment, but it ain't impossible. All it takes is a new set of mathematics and the right approach, and we'll be able to do this. The problem America has, Jay, is its governance of data — and it has no national governance of artificial intelligence, by the way. It's behind. Europe has a policy where every single AI must conform to one of four risk levels within two years. That's for 550 million citizens — well, I think it's about 550; it might be a bit less these days. So there are regions equivalent to the size of the US that are tackling AI to make sure it does well for the societies within that region, in this case the European Union. And to be honest, within the US there are some great initiatives as part of the National Security Commission on Artificial Intelligence, but there's no leadership. There's no leadership into this AI-centric future, Jay. And that's what we need. Well, let me go to the next level of my inquiry with you, Matthew. So right now, it's a long shot on making any changes, good, bad, or otherwise, because the federal government, especially the Senate, is blocking everything. They're blocking infrastructure, which is but a motherhood kind of issue. It's unbelievable they're blocking infrastructure. But one thing that we should be doing — and many countries have learned the benefits of doing this — is to know the truth, to know the truth. So for example, if I wanted to find out about Pol Pot back in Cambodia, and I had AI and a lot of data, I could find out what was going on. I could find out what led to it and who was involved in it and what they did and how it affected the country and the sociology and the culture and all that. I can learn a lot.
Likewise, in any remarkable public social event, I could do an investigation using AI. You know, when I was in the service, I was an investigator, and I gotta tell you, it's not that hard to investigate. You get all the facts, you organize your material, and the conclusions come by themselves. It's not hard to do it. And it's a lot easier to do it with AI. So for example, we're gonna have this — some people say a circus — in terms of Nancy Pelosi's commission. She's also got some strange members on her, quote, bipartisan House commission. But suppose I said, forget about that. That's all political. It's a circus. Why don't we let the Matthew James Bailey AI machine go in there, and give it all the data we can possibly give it: the videos, the television footage, all the articles that have been written, all the statements that have been taken, hither and yon — just feed it as much as we have and let it decide what happened, who was responsible. You know, the extent of the event, the extent of the problem, the connection with, you know, our democracy and government, and what can be done. None of this is rocket science, but if you put the bureaucracy and the politics in front of finding an answer, you don't get an answer. The AI is not gonna be concerned with that. You can make it clear and clean. So my question to you is: how far away are we from finding the truth? Not only here for January 6th, but in other places where there have been war crimes, atrocities, governmental failures — to find out and tell the people. Because they don't know necessarily, because there are actors who would prevent them from knowing. But if you put an AI investigation in place, you could advise them fair and square what happened. What do you think? Well, I think we'll see artificial intelligence assisting government to be more efficient and making government more accountable, Jay, and also measuring the efficiency of their policies and suggesting changes. So we'll definitely start to see that.
Whether that passes through Congress, who knows. So we'll start to see that. We'll start to see AI as a digital policeman, protecting people and doing crime investigations. And we'll probably see AI monitoring the quality of Facebook and social media in terms of how well it is doing and what its impact on society is, whether positive or negative. So we will start to see AI in a kind of guidance role within different aspects of a nation, from government through to services within society and how citizens are experiencing them. But we don't want to enter into the Chinese approach where everybody's surveilled and monitored. We don't want to go into that, because that is not just potentially non-democratic — it is non-democratic. And it's also 1984, in reality. So there's a fine line to walk, Jay, where we keep AI democracy-centric, which is what I talk about in the book, to assist us to advance into a better future. But we don't want it to be 1984, George Orwell, do we? No, because there are unforeseen — well, maybe, does that make sense, under Orwell, foreseen — consequences, but even things that he did not foresee. That's right. You know, we could have the 20th century and, you know, Adolf Hitler all over again, but much worse. And if Hitler were alive today, he would be looking for AI to run Germany, and he would be using it. He certainly would. We'd have Alan Turing inventing a new AI to help us get past all these issues, which is kind of like a bit of a war, in effect — regaining control and getting our sovereignty back in the digital world. I can see Alan Turing working on some very clever mathematics for us to regain our sovereignty again. Absolutely. So one last question to ask you. You mentioned a minute ago that, you know, Europe, with its 500 and some odd million people, is ahead of us.
And I would venture to say that China is ahead of us, because when China puts its head down on something — and it did put its head down on AI maybe 10 years ago and it deployed all kinds of resources to being best in the world on AI, as well as hacking, by the way — so, you know, they are ahead of us. But let's assume that we put resources into this. Let's assume that the government reads your book. What is it called — Inventing World 3.0? You have it handy; hold it up so we can see it. Do you have it? Sure, I do. There it is, yes. Inventing World 3.0, okay, it's in there. So let's assume that we all get the message about the value of AI in this country, to solve all the problems you and I have been talking about, and many more. What are we missing here? What are we missing to get there? What do we need to have to catch up with Europe and China? What do we need to solve all these problems? AI for the United States is a little behind the curve. How do we get ahead of the curve? So a couple of things here. First of all, the US has invested $50 billion in semiconductor manufacturing in the United States of America. And that's very important, to keep semiconductors — which will literally be the AI brain, Jay, in our computers — within domestic shores, if that makes sense. And so that's a very good move by the American government, and Intel and Nvidia are probably number one in the world for AI and semiconductors. So the US is doing some good stuff, and also about the raw mineral supply, Jay, that makes the semiconductors — so it's doing some good things. What I would say about China — there are a couple of things I wanna say about China. First of all, they are using AI to ensure that children are not spending too much time on gaming. So they're actually looking at mental health, which is an interesting thing. And they announced recently a research lab dedicated to AI in environmental and sustainability applications.
So there are some interesting things going on in China, before we start raising the flag and all that kind of thing. What this country is missing is leadership. It's missing leadership into a new tomorrow. That's what it's missing. There are some incredible debates in Congress and the House that I've watched. Not all of them — it's like any government institution. Some of them are really quite silly. Some of them are really quite profound. But what we lack, I believe, is a mindset to take us into the future. And every region, every state needs this mindset — leadership to take them into the future — because, Jay, AI is not going away. And so having leadership that takes the cultures and the citizens into a future where everybody's nourished — and there's no reason why we can't do this, Jay — that's what we need. That's what we're missing now. And the European Union is starting to show signs of this. Well, that does drive me to one more question. I'm sorry, Matthew. That's okay, it's fine. I really enjoy talking. In the nature of energy here in Hawaii, one of the big buzzwords is distributed energy. And that means, if you're going to do clean energy, you want every household to have solar panels on the roof. So then it's distributed. It's not a hub with spokes out to them; they have their own control of their own technology. And I'm thinking that this would be made easier in the United States, or anywhere, if it could be organized on a distributed basis. Meaning, I have a little town in the middle of Missouri — I'm picking them because they have a lot of problems with COVID right now — a little town in the middle of Missouri, and they have the ability to run AI for their healthcare system or their water system or the sewage system. And they don't have to bring in the big boys from New York or San Francisco or LA or anything. They can do it themselves.
Now, AI — and I know that computers are getting better and the transfer of data is getting easier and the software can be passed along from place to place — is it possible in the future? Do you think this will happen, where AI becomes distributed, where every little town and hamlet will have opportunities? Absolutely. And the book talks about how we can achieve this democratically and ethically. And the key thing is, states and counties should self-determine their future with artificial intelligence rather than be beholden to a government agenda, okay? So yes, AI will be distributed everywhere. And I've talked about this in films. In fact, we won an award recently for one of our films talking about this, actually. So, Jay, it will be everywhere. The key thing to ask is this: do we want AI to be biologically integrated with us or not? And I don't think it should be biologically integrated. I think AI should be sitting next to the human experience, but not integrated within it. I'm not sure about that. I mean, we're talking about transhumanism now, Jay, and kind of cyborgs and kind of the Borg collective. But AI will be distributed. And the reason for that, Jay, is very simple. It can make decisions faster at the edge. It can make decisions quicker in our lives. It can understand us better, for our benefit, under our sovereign control. Having AI close to us can have tremendous benefits, so long as we do it ethically and we do it with meaning and purpose and sovereignty for the individual. You know, you raised one more issue, and I promise this is my last question. One of the problems that I have in our world today is that we don't seem to be able to remember what happened before. The old thing about he who forgets history is doomed to repeat it. And so when you have these policy organizations like Congress, those guys can't remember what happened two weeks ago. And the same thing with state legislators, and for that matter elected officials, or even judges.
You know, they're younger than we are, Matthew. They really are younger. Sorry. But the thing about AI, it seems to me, is that AI can look back. It can remember. It can draw upon data that is from before. So it doesn't lose the benefit of the human experience gone by. What do you think? Right — so, AI actually, this is a really good point. AI can be protected never to forget, right? Now it can be influenced, so the important things from the past can be diluted, but you're right, AI could be a tremendous storage mechanism. One of the big issues that I've seen on social media this week is AI replicating Anthony Bourdain's voice as part of a media film, I think it was. And the hoo-ha, if you don't mind me saying that, on social media was really interesting to watch, because some people said, well, that's okay, it's cool, and others were like, no, that's terrible. And so we are going to see this remembrance issue becoming part of a social dialogue, Jay. Let me ask you a question. If in 50 years' time, maybe you and I will be speaking — but it won't be you and it won't be me. It'll be our digital avatars, our personal buddies, having a conversation. Now, this conversation about remembrance of ourselves, and it being kind of eternal — that's an interesting philosophical question, right? Yes, yes. Well, there's a Holocaust project, I think it was started by Steven Spielberg, that does that. What happens is they interview these Holocaust survivors, they get the answers, they get the words, they get the facial expressions and the body language, and then the fellow dies. And from the video and the audio — what they collected in response to these many, many questions and answers — they can recreate a conversation with this person. It's quite amazing. It was on 60 Minutes. And I'm sure that from the time it was on 60 Minutes till the time you and I look at it next, it'll be way better. And so what you're talking about is already, to an extent, happening.
Oh, Matthew, I really enjoy these conversations. I can't tell you how it opens my mind to talk to you. And I hope we can do it again — promise me. Yes, of course. I would love to be back. Thanks for the opportunity, Jay. Aloha. Matthew James Bailey. He is a very well-qualified person in AI, and we're looking forward to talking to him again. Aloha.