Okay, we're back. We're live. We're here on Think Tech Tech Talks: Inventing World 3.0. That's because we have a lot of time on our hands, and we'll attend to that today. Our guest wrote a book about it: Matthew Bailey, or more specifically Matthew James Bailey. Hi, Matthew. Nice to have you on the show.

Great to be here, Jay. Thanks for having me.

So you wrote this book, and you've written a number of books, about the intersection of technology, ethics and the future, and we really want to get your thinking about it. So I guess the first question is: what do you bring to the table here? Have you been devoting your life to this? When did you begin getting passionate about it?

Yes, I have, actually. For ten years I've been planning to write this book. But certain things had to happen beforehand, Jay, with the Internet of Things phenomenon and the smart cities work I've been leading around the world. That was to bring more automation and more data throughout society, so that we'd have the ability to train artificial intelligence with all this different data, these huge tranches of data. So it's been ten years in the planning. And this is basically a new paradigm for humanity and artificial intelligence to advance into a golden age, and we can dive into what that actually means. But this really is a pivot on how humanity can leap beyond the challenges of today into an equitable, thriving and flourishing future where culture and diversity are honored within our societies, the human experience is honored, AI and humanity are partners, and we have a flourishing relationship with our environment.

Well, you know, I think it's only fair that we explain to our audience how AI can do these remarkable, magical, miraculous things. I went to a class once in AI, and it was very primitive. The instructor said, well, this photograph looks like this photograph.
And so we have a million photographs, and after a while we figure out what the comparison is, and then we tell you what we learned by comparing a million photographs. Okay. But I cannot actually visualize how AI can change the world. I'm sure it's way beyond what I learned in that class. Can you tell me the specific, most remarkable applications by which AI can change our lives and the world?

Well, first of all, for the audience: AI is a dumb technology. It can't feel love, it can't feel compassion. It may attain some kind of self-awareness, but there's very little chance of it attaining consciousness itself; to quote the Dalai Lama, consciousness is life itself. So I don't think we can digitize life. AI varies in terms of its applications, some of them fairly mundane, though not really boring at all. It's looking after and protecting us in our cars with self-driving cars. It can be in our cyber grids, ensuring that our telecommunication networks are protected and up and running. It can be used in genetics, where we're looking for particular issues in the human condition in order to resolve them. Natural language processing is in our Alexa. AI is in every aspect of our society.

Now, here's what we need to do, Jay: we're hitting some real issues where AI is showing bias. It's not understanding cultural diversity, and really it's not aligned with the purpose of humanity. So what do we do? We need to liberate artificial intelligence from its prison. And how do we do that? Well, first of all, we need to democratize innovation. That means we need more people innovating. But moreover, we need to start putting things inside the mindset of artificial intelligence. What's our Constitution in the US? What are our cultures? How does it honor diversity, and how does it honor humanity? And all of this within a symbiotic relationship with the environment.
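The classroom description above, "this photograph looks like this photograph," is essentially nearest-neighbour classification: store labelled examples and give a new image the label of its closest match. A minimal sketch in Python; the 3x3 "photos" (flattened to nine brightness values) and their labels are invented for illustration, not anything from the episode:

```python
# Toy illustration of "this photograph looks like that photograph":
# nearest-neighbour classification over a tiny labelled image set.

def distance(a, b):
    """Sum of squared pixel differences between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(photo, labelled_photos):
    """Return the label of the most similar stored photo."""
    best_label, _ = min(
        ((label, distance(photo, known)) for known, label in labelled_photos),
        key=lambda pair: pair[1],
    )
    return best_label

# A tiny "dataset": bright images labelled "day", dark ones "night".
dataset = [
    ([9, 9, 8, 9, 9, 9, 8, 9, 9], "day"),
    ([1, 0, 1, 0, 0, 1, 1, 0, 0], "night"),
    ([8, 9, 9, 9, 8, 8, 9, 9, 8], "day"),
]

print(classify([9, 8, 9, 9, 9, 8, 9, 8, 9], dataset))  # → day
```

With a million photographs instead of three, and learned features instead of raw pixels, the same "find the closest match" idea scales up to the systems discussed here.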
And so the book reveals how to invent a new form of artificial intelligence that even citizens can understand how to build, because we need to bring citizens into the conversation to be able to shape an AI that's working for our benefit and not to our harm.

Can you do that? Can you say to an AI, look, I want to give you some guardrails here. I believe in the Constitution; I want the Constitution incorporated in whatever you do. This is like the robot movies, you know, the robot programmed not to hurt people. So can you do that? Can you build that? Would it be complicated to put the Constitution into an AI application?

Well, first of all, if we go back 5,000 years, the Vedas talk about how we are one family, and the Constitution starts off with the people. It's similar: inclusivity, diversity, and honoring our human democracy and sovereignty. So there are certain aspects that are very, very common. So can we put those guardrails, that culture and cultural understanding, into artificial intelligence today? No. But the book reveals how to do it. It reveals new digital genetics for artificial intelligence, how we can start putting culture and diversity and humanity into artificial intelligence, so that when it's operating in society, it knows how it's meant to behave: at a macro level, that's nationwide; at a sub-macro level, say across the state of Hawaii; and also at a personal level, because all of us have a personal culture, and it varies from person to person. So this is a whole new body of innovation to advance artificial intelligence, a huge amount of innovation for academia and business and the military, to be able to start shaping an artificial intelligence that moves humanity forward and doesn't keep us in the locked-up paradigms we have at the moment.

Yeah. And the danger, let's talk about the dark side of this, the danger is that it gets into the wrong hands, and that person, call him a dictator if you want,
could use the nascent, unrestrained, no-guardrails form of AI to do horrible things. I mean, there are people in the world who would misuse it, for sure. Okay, so then you have two versions: you have the nascent version, I call it that, the one with no guardrails, and then you have the one that has the ethical overlay, the limitation of ethics, which is really, really important when it's so powerful. But the question I put to you is, assuming you can do that, you can put in the Constitution, put in the Ten Commandments, whatever: how can you keep it in there? Because if I'm the dictator who wants to use it for nefarious purposes, I don't want those guardrails, and I'm going to do what I can to avoid your guardrails. I'm going to use it for my own ends. So what kind of institutional, governmental, ethical considerations go into making sure that it stays safe?

Yeah, so this is a great question. There's no doubt about it that America needs to protect its own borders, and that's its digital borders as well as its physical borders, and this is where AI can play a role. It can detect other countries that may not hold to the same democratic values as the US and basically head them off: digital border control, or digital ICE, if you like. And so what the US needs to consider is basically having a clearinghouse for every single AI, to comply with digital citizen values; we're talking about a digital citizen test for AI, so that only compliant AI is deployed within the nation, or even within a state, where Hawaii might have its own guardrails to ensure that AI complies with the digital constitution, if you will, the cultures, the values of the people of Hawaii and their vision. So we definitely need that. Now look, the US government is doing a couple of things that are really quite cool.
They've set up a department of artificial intelligence; they recognize that whoever wins AI will remain a prominent player in the world, and we've already talked about the competition with China and other types of societies that may not agree with the democratic values we have. They've also talked about trustworthy AI in every single federal agency, which is basically about ensuring that AI doesn't have bias in its execution and service in society. That's a real challenge. So the US needs to define a digital constitution for AI, and it also needs a digital citizen test. One last thing I think is really interesting, Jay, is that they announced a quantum computing initiative in Q3, Q4 last year. And this is important because quantum computing can create quantum cyber encryption to protect artificial intelligence and to protect the US borders from invasion, if you like, by non-democratic AIs.

Well, that raises a question in my mind: who can actually do AI? I mean, I told you my experience with AI is really primitive. I can program in other languages, but not AI. I wouldn't know where to start; I'd have to read a lot of books. How about you, can you program in AI? How do you get to be a programmer in AI?

Well, first of all, anybody can create an AI chatbot in half a day. It's really easy to do, and there are lots of different languages and services you can use, and libraries of algorithms you can use for machine learning and other types of AI as well. So it is accessible to program in AI, and that's important. You know, 1% of the whole population of Finland, Jay, is learning to program in AI. It's really interesting: 1% of the entire country, the general public, being taught how to program in AI. And I think America needs to think about something very similar.
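The "chatbot in half a day" claim is plausible at the simplest end of the spectrum, where no machine learning is needed at all: a keyword-matching bot is a loop over canned rules. A minimal sketch; the keywords and replies are invented for illustration:

```python
# Minimal keyword-matching chatbot, the kind anyone could build quickly.
# Rules map a trigger keyword to a canned reply.

RULES = {
    "hello": "Hi there! How can I help?",
    "hours": "We are open 9am to 5pm, Monday to Friday.",
    "price": "Plans start at $10 per month.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Could you rephrase?"

print(reply("Hello, what are your hours?"))  # → Hi there! How can I help?
```

Commercial bot-building services layer natural language processing on top of the same idea, matching on intent rather than literal keywords.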
There's no getting away from it, Jay: AI is here, and we need to steward its future mindfully, to advance society in America and not to oppress it.

You know, we've had a number of talk shows with people in Helsinki, and they are so enlightened. They see into the future; they have a different way of looking at things, both individually and from an institutional point of view, and I'm so impressed with them. But here we have, call it a body of students in the United States, and I don't think 1% would know anything about how to deal with AI. We have a body of students in China, a body of programmers if you will, and I think a huge percentage of the students in China would know how to do it, because it's in school; it's probably better than Finland. So the question is, how do you avoid that disparity? How do you train up? I don't know how many students there are in the US, maybe you have an idea; it would be in the tens of millions right now, today, this moment. How do you get them up to that level of understanding AI and its possibilities?

That's a great question. We need to look at the education system in the state and also nationwide in the US and ask: what are we training the students for? What future are we training them for, what jobs, and how can they contribute to society and become innovators? This is really important. If we're to advance into World 3.0, we need to democratize innovation. So for example, if the state of Hawaii created its own constitution for AI, its own policy, and also its evolutionary ethics framework for AI, then we could bring that into the innovation in academia, in business, in the military and other institutions, to be able to start programming AI for the state of Hawaii. And this could be done. I think if we try to do this nationally it may just be too much to do. And we've seen this in the smart cities movement,
Jay: the innovation is coming back to the regions. Regions and cities are moving faster in their innovation than federal government. And that's not saying anything negative about federal government; they just have more agility and less red tape. And they also see, at the street level, the real challenges within their cities and communities, and they want to fix them. So I think we have to equip the innovators of tomorrow, Jay; it's as simple as that. And we need to be mindful of what future we're building the US for. Is it just a single bottom line, or is the future of the US based on people, planet and profit? Is that the new standard for doing well? Now, Klaus Schwab, in Time magazine last year, talked about the triple bottom line, people, planet and profit, as the new standard for businesses doing well. And so in World 3.0, what we recognize as we move towards it, Jay, is that we're actually building a destiny for humanity, and we're committed to that with an enlightened mindset, as you were saying about Finland. And I think that's where we need to head.

Actually, I have so many questions for you. This is really very provocative. So what is World 3.0? If I get up in the morning and walk out my front door, how does World 3.0 look to me, as opposed to where we are now, which I suppose is World 1.0 or 2.0?

So, first of all, everybody would have a personalized AI: a digital buddy, or a digital angel or a digital guardian. And this buddy is there for your well-being, your nurturing, and also for your self-realization. But it does a couple of things that are very, very unique. It manages the digital world on your behalf, which puts you at rest. No longer are we on the phone so much; no longer are we engaged and pulled in different digital directions. It does it on our behalf. It puts us at rest and gives us good health.
The other thing is, it guards you from infringement of your data ownership and infringement of your sovereignty. It guards your sovereignty and ensures your democratic rights as a sovereign human being. So it stops the whole, and this might be a little bit triggering, stealing of our data, which is what's happening now, mostly because we're not involved in the process. It's completely unethical; there's no transparency. It guards your digital life and also supports the self-actualization and well-being of your personal life. So that's one of the big things that's different in World 3.0.

I can see it, because the power of AI is clear. You could clean up the web. You could stop people from hacking, stop people from taking your data, stop people from doing malevolent things to you on the web, bullying, for example. I guess what you do is set up the rules. So you suggest that it's personal, it's for me to begin with, but I suggest to you, Matthew, that there are various levels, there's a continuum here. So first we take AI and make it work for me: make my life better, safer, cleaner, and I have to work less. Maybe I get my Andrew Yang $1,000 a month from the government and I'm at ease; you put it all in the hands of efficient programming, and that would be so for everybody. I see that as a future, if this comes true. And then, beyond me individually, there's my city, your whole approach to cities and city planning and smart cities, all that. The problems in the cities are about buses, about transportation, about clean air and clean water, about security, all that stuff. And then I put AI solutions into my city; now it's not me anymore, it's a community. Not a very big community, but a community of people. Okay, and then, I'm really just throwing out possibilities for you,
then I talk about my state; my state has bigger issues yet. I want my state to run really well, so I take the AI and put it on a still larger platform: smarter, more rules, more possibilities. And finally I go to national governments, I go to global governance, and I run them according to rules that are maybe different, more visionary, higher-altitude so to speak, for each of these levels. We're not there yet, obviously, but we would start at the low end and then see. What are your thoughts?

You must have read the book, because that's what it talks about. So there will be layers of a continuum. In the book I talk about the environmental layer, which has Paris climate accord mandates, and mandates for the state or the city and also for the nation itself. That's where AI, and all its progeny in different aspects of society, is working to attain certain measures in a symbiotic relationship with the environment. That might be in optimizing transportation, in smart grids, in carbon sequestration in agriculture, in logistics: basically having the DNA, the digital genetics, and this is why it's evolutionary ethics, digital genetics focused on the environment in every progeny of AI, which helps us to win faster. So it is a continuum, you're absolutely right. But the important thing is that states and communities and even individuals should be able to self-determine their own relationship with AI. That's why I put very simple frameworks in the book, Jay, so that people can start to engage in the conversation by setting principles around AI itself, what they want to see AI do. Such as: what does my sovereignty mean? These are important conversations, because what we're looking at, Jay, is a new life form. A simple life form, perhaps, but a new life form.
And because we're kind of the main life form on the planet, we should be involved in shaping this new life form, particularly when it affects us personally, shouldn't we?

Yes. Interesting that you call it a life form. I mean, the suggestion is that an AI is a person; a life form is a person, isn't it? I have my buddy, you mentioned a buddy before, that's my life form that helps me, my guardian angel so to speak, and I have to shape him, or her, in a way that helps me. And my God, think about how wonderful it would be to have a guardian angel who would help you, advise you and solve your problems.

And so, you know, we're not there just yet. That's why I wrote the book, so that we can innovate this type of guardian angel. And that's really important. So yes, there's a lot in the book in terms of what we can innovate and how we can innovate our future. There's a lot in there in terms of it being a life form. It's a new type of life form. It doesn't have consciousness; it will have self-awareness eventually, Jay. But it's kind of, I don't know, it is like this digital guardian philosophy that's out to do good for you and to benefit you, as opposed to being beholden to other agendas. And that's the problem we're seeing with AI: there are hidden agendas behind it, in the bias of the algorithms, and also in the unethical training of AI, because it's using unethical data. So what we're doing is returning to ethics in order to advance artificial intelligence together.

I come up with two problems. Who specifies the ethics? I mean, you and I will agree on some things; we may not agree on everything. We may have a different view about an ethical rule, or about a constitutional rule. There are people in this country who don't believe in the US Constitution. So you have two camps on many issues, and query who gets to decide what goes into the AI black box.
And that person who decides will be extraordinarily powerful, maybe in subtle ways.

Right. You're absolutely right, and these are really important questions, Jay, because where do we start with putting some kind of constitution, or some kind of principles beyond Asimov's laws, which is what the book reveals, into AI? This is why digital genetics are so important, and why I went to evolution: because whilst you and I might agree on trust, maybe in a given scenario our own personal cultures may differ. Allowing the genetics to cater for that difference means that you're being served well in trust and I'm being served well in trust. So it needs to have subtle nuances, because if we don't do that, Jay, then we're just going to have a flat, boring set of ethics that may destroy our cultures. And we don't want to become automatons, do we?

But you pointed out something that I think is really, really critical about AI. Even in my primitive lesson on it, it was clear that AI learns. That's what AI is about: it learns. So if I make an ethical rule, or some guardrails or constitutional provisions, and I find it isn't working for some reason, that it's not fair to people, or not fair to most people, I can make an analysis, me or the AI, and learn from what has happened and improve it. Furthermore, it's not just every year or every five years, it's every second. So it's always learning. And so what you have is an ongoing search for perfection, if you will, with the dynamic of the human experience all built into it. And I think people have to understand this: it's not only our buddy when we wake up in the morning, it's our buddy all day long, and it's always changing.

Well, that's right. It's actually referred to as a busy child, because it never stops. It never stops learning.
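The "busy child" that updates itself with every new example, rather than in periodic retraining batches, is what machine learning calls online learning. A minimal sketch: a one-feature perceptron nudging its weights after each observation in an invented stream (learning whether a number is "large"); the data and threshold are illustrative only:

```python
# Online learning sketch: the model updates on every example it sees,
# never waiting for a batch. Here, a one-feature perceptron learns to
# separate small numbers (label 0) from large ones (label 1).

weights = [0.0, 0.0]  # [weight, bias]

def predict(x):
    """Classify x as 1 (large) or 0 (small) with the current weights."""
    return 1 if weights[0] * x + weights[1] > 0 else 0

def learn(x, label, rate=0.1):
    """One online update: nudge the weights only when the prediction is wrong."""
    error = label - predict(x)
    weights[0] += rate * error * x
    weights[1] += rate * error

# A stream of (value, is_large) examples arriving one at a time.
stream = [(2, 0), (9, 1), (1, 0), (8, 1), (3, 0), (10, 1)] * 20
for x, label in stream:
    learn(x, label)

print(predict(9), predict(2))
```

After the stream, the model classifies large and small numbers correctly, and, crucially, it would keep adjusting if later examples contradicted the earlier ones, which is exactly the always-learning behaviour being discussed.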
Now, one of the things that's really important for the future of any state, and for the US itself, is very strong supercomputing, high-performance computing accessibility, throughout the states and across the nation, because these machines train AI, that busy child, at lightning speed. Quantum computing is just starting out; it's still very early and has a long way to go, and it will probably focus on things like cyber encryption and genetics, which is great. But we need a really strong supercomputing strategy within states and across the US, to make it accessible to the private sector, to education, to academia and others, so they can train these busy children and get them right, learning on the right track and not potentially going off on a wrong track.

Trust me, the education we need is not necessarily a programming language. The education we need is to understand the capability of this and how to interface with it. And I can see, I've been interested in your vision of this, I can see a black box, not very big, which is near us all the time, which talks wirelessly to a supercomputer out there. The whole thing is connected in kind of a mesh technology, and it is always on. And I know enough about AI to know how it works and what it can do for me. Like my Alexa right here; I should ask Alexa what she thinks of AI. I think I will. So the point is that I talk to it. I know enough how to talk to it, and it knows enough about me to engage with me, to be my buddy. That's all I need. The important thing is the power of it, the ability to compare against large databases, and the rules.

Natural language processing, which is what you're referring to, the human voice interaction with a digital device, is a form of AI.
What we're learning is that speaking to technology is actually much easier, most of the time, than being on our phones; it puts us more at rest. So voice interaction is just going to get bigger and bigger; it's already quite huge at the moment. Going back to the start of your question, Jay, about other types of education and skill sets: philosophy will play an important role in the future of AI. So will social science. We absolutely need more and stronger cultural leadership, because if we take the Constitution of the US, it will have different flavors based on particular cultures and diversities and even genders. So in the book I talk about different genders for AI: it may be male in one instance, female in another, it may be LGBTQ or they, it may be a new gender, Jay. These philosophical questions are really important, and that's why I think philosophy will play an important role in shaping the future of AI. So what we're going to see, Jay, is a change in company culture. Corporate culture has got to change if it's to define a beneficial partnership with AI. And so we're going to see business cultures change. You see, AI is not just about us innovating it. It's actually encouraging us to become more mature in our human experience. It's inviting us to evolve.

We have to set it up that way. We have to set it up so that corporate culture must change. For example, right now the bottom line is very important to corporate culture, at least in this country and in the multinationals. What is going to make those corporations change? It's going to be efficiency, the leverage of AI. It's going to be: your bottom line will be better if you use this technology. However, if you use this technology, there are certain requirements we impose on you, and that is: don't be so concerned about the bottom line.
Well, culture determines the success or failure of any company. And so looking at the culture around the human side, the human influences, and AI is a fundamental conversation. What we'll probably see AI doing is removing a lot of middle management, because there's very little creativity there, and AI isn't a creative force; it's a great logical force and decision-making force. So we're probably going to see middle management replaced with AI at some stage. Now, as we head up into the creative parts of the business, into the SVPs and the C-suite, there's a lot of creativity there, so AI probably won't replace them. But it will probably replace middle management.

Yeah, well, middle management can go home and spend Andrew Yang's $1,000 a month, you know. I saved one really tough question for you, Matthew. We don't have a lot of time left, but I do want to ask you about the upper management of all this. Somebody has to set this up, at the top of the tiered continuum, from the individual to the community to the bigger community and so forth. At the top of it, somebody has to set the ultimate rules. For example, if I say, well, here are some good rules and you can put them in your black box, can you actually control somebody who is nefarious, who doesn't follow those rules, who makes his alternative black box with no rules or bad rules? Somebody has to have sanctions. Somebody has to say, no, you can't do that, you have to come along with us. And this person or entity has to be very powerful; the leverage of this person or entity is enormous in what you describe as World 3.0. How do you select these people? What sanctioning power do you give them to make sure that the rules they set are followed?
So the book talks about how companies and organizations, whether local, national or international, can set up their own AI ethics standards, or guardrails, to ensure only ethical AI is deployed within the business and serves their clients, and it's the same for the United Nations Sustainable Development Goals. There's a quote from Ray, who's a great author on AI: as more and more AI enters society, Jay, more and more emotional intelligence is needed in leadership. And so I think what we'll start to see is AI as a companion to a CEO and a C-suite, advising and guiding them, and maybe even the shareholders, on decisions that best suit the business. The power, I think, will only be given to those that have the new emotional intelligence and the maturity to understand where our future is really going.

Are you going to be in our future, Matthew? I hope so. Are you going to write some more about this? Are you going to try to participate in the development, the evolution, of this technology?

Yeah, so I'm doing both. I've got the second book already planned and sketched out, and it will focus on how we bring culture, and I don't think we can avoid some aspect of spirituality, Jay, into this. What I'm doing at the moment is working with a new national supercomputing company that can deploy supercomputers and AI around the US and have a clearinghouse, which I think is going to be great. There are some great conversations going on with computing giants at the moment. And also, one of the boards I'm on in the UK, Smarter.ai, is about to launch a new platform that democratizes AI, so businesses, government and organizations can literally speak to an AI to create an AI. That's going to be really exciting as well. So I'm getting my hands dirty in it, because I should take responsibility, but I'm also planning the next conversation as well.

It's been wonderful to talk to you, Matthew.
I would like to talk again, because there's so much more we haven't covered, and, as with everything else, it's a moving target these days. So, you can run but you can't hide, Matthew. I'll find you and we'll do this again. Okay.

Thank you so much for having me on the show. It's been great, and I'd love to have a chat another time.

Thank you so much. Take care.