It's the one o'clock hour on a given, what, Wednesday? And here in Think Tank, we're talking community matters and we're talking about Coded Bias, the movie, with Matthew James Bailey, who joins us from Denver. Hi, Matthew. Hi, Jay. Great to be here. The same to have you here. So let's talk about Coded Bias. You wrote a book and this is right up your alley. The book you wrote, which we have discussed before, is called Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence. And the book is remarkable in that it talks about how we got here. That's very important. You cannot understand the present or the future unless you understand the past. You must understand the evolution, and indeed we are in a transformational evolution for sure; you point that out in many ways. But here we have things popping up on the landscape. This movie popped up on the landscape and won all kinds of awards. And it's like a home movie. There's a young woman, a student at MIT, studying, I guess, computer science. And she finds that there's an inherent bias against Black people in artificial intelligence. Can you talk about the movie, Matthew? Sure. So the movie is Coded Bias, and it's on Netflix, so people can watch it today. And quite frankly, I think this is a movie that everybody should watch. It's very easy to understand how they explain the story of how Joy Buolamwini, I think that's how you say her name, an MIT genius, was looking at facial recognition software and realized that it's not very good at identifying people of color. And there's this one moment in the movie, Jay, where the facial recognition can't see her face, but she puts a white mask on and the facial recognition software recognizes her.
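The kind of audit described here, scoring a recognizer separately for each demographic group rather than with one aggregate accuracy number, can be sketched in a few lines of Python. The numbers below are hypothetical, not figures from the film or from Buolamwini's actual research:

```python
# Per-group error-rate audit: an overall accuracy number can hide the fact
# that a system's errors concentrate in one demographic group.
from collections import defaultdict

def per_group_error_rates(results):
    """results: list of (group, was_correct) pairs from an evaluation run."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_correct in results:
        totals[group] += 1
        if not was_correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results: overall accuracy is 80%, which sounds
# fine until the errors are broken out by group.
results = (
    [("lighter-skinned", True)] * 95 + [("lighter-skinned", False)] * 5 +
    [("darker-skinned", True)] * 65 + [("darker-skinned", False)] * 35
)
rates = per_group_error_rates(results)
```

With these made-up numbers the audit reports a 5% error rate for one group and 35% for the other, the kind of gap a single headline accuracy figure conceals.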
So Coded Bias is the beautiful story of her as she goes through this discovery of bias in artificial intelligence, how she ends up in Congress explaining the bias in artificial intelligence, and how Congress then moved toward banning facial recognition in federal agencies. And it goes off to the UK, showing racial bias in the facial recognition software used by the police. It goes into China and shows just how much facial recognition software is needed for you to get food, right? This social scoring. And it also branches into a beautiful story in Brooklyn, where there's a group of residents who have to use facial recognition software to get into their building, and how the facial recognition software is being used to profile their activities within the building. It's quite fantastic. It's very educational, and it's a video that will frighten people when they see how much control artificial intelligence has in society. And there was one comment made within the movie that I really liked, and it is this: we need to bake Western ideas into artificial intelligence. And that's what the whole book reveals, how to do that. Yeah. Well, and this is perfectly in tune with your search for a transformational ethic. We live in a time where it's hard to get our arms around the new technologies. They run away from us. They run away with us. And this is an example of that. And just as you say, we may not be completely aware of how much of a role AI plays in our lives, in recognition and all the other things that flow from that. But in fact, nobody can escape, not in China and not in the US. I think the profound thing that it left on me was that in China, all this is done by the government. In the US, it's done by Facebook and Amazon. Well, yeah, it's all done by big tech. And the European Union last week passed some regulation of artificial intelligence that literally changes the entire global landscape of artificial intelligence.
AI deployed within European borders will, within two years, have to be classified as to whether it's high risk, medium risk, or low risk. Every single artificial intelligence has to be defined. And that changes the entire game. Big tech now has to become transparent. And the US has literally just fallen behind in the race to make AI ethically aligned with society, so that it honors the cultures, the beliefs, and the values of society; AI as a good digital citizen and not a citizen that influences our elections. Yeah, we should get to that. But one comment about Europe and the EU: it strikes me that they're so far ahead of us in appreciating these issues and dealing with the business side of it, the ethical side of it, the social side of it. And you've got to give them credit for coming up with this. It's very courageous. And obviously democracy came out of Greece, I think we can probably agree on that, which is in the European Union. And so it's not surprising. Now, in the US itself, there are some good things happening, Jay. Washington State has done something really incredible. They've passed it through the Senate, and it's going to the House now, where everybody within Washington State gets user agency of their data; they get control of their data again, right? Which is way beyond what the federal government has done. And also, every single AI that's involved in facial recognition has to pass a digital citizen test to ensure its quality, to do well for all the diversity within society. And the book talks about this too. And obviously California has its data regulation laws, but I haven't seen anything where they've enforced them against Facebook, or enforced them against Google, or enforced them against big tech. So the US literally has just dropped behind in the alignment of artificial intelligence as a beneficial citizen within society.
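The risk tiering described here can be illustrated with a toy classifier. The category lists below are examples drawn from this conversation, not the regulation's actual annexes:

```python
# Illustrative sketch of the EU tiering idea: every AI system is assigned a
# risk tier before deployment. These sets are hypothetical examples only.
HIGH_RISK_USES = {"facial recognition", "credit scoring", "recruitment"}
LOW_RISK_USES = {"spam filtering", "video game ai"}

def risk_tier(use_case: str) -> str:
    """Map a use case to a coarse risk tier; unknown uses get a closer look."""
    use_case = use_case.lower()
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LOW_RISK_USES:
        return "low"
    return "medium"
```

So the phone-unlock example Matthew mentions later, facial recognition, would land in the "high" tier and carry the heaviest transparency obligations.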
Well, a year ago, maybe more, Congress seemed to wake up on the point, and they had some hearings with testimony from the big tech guys, the CEOs. And if you watched it, and it was on C-SPAN and the media, you realized that the people who were asking the questions, the sitting legislators in the Congress, didn't have a clue; they were uninitiated. They didn't really know what to ask, and they weren't getting any answers. Mark Zuckerberg was leading them around on a merry chase. But the fact is they didn't know what to ask and they didn't get any meaningful information. A year goes by and they restarted those hearings with, I think, pretty much the same result. They had a year to get up to speed and I'm not sure that helped. And I'm not sure they're doing anything right now. I don't think Congress is doing anything about anything right now, except they did one good thing today. I forget what it was. They did one good thing today; I've got to be prepared. But by and large, Congress is not up to the task, and I'm not sure when it's going to be up to the task, given the divisiveness. But what, go ahead. This is why innovation is returning to the regional or state level in the United States. That's why Washington State is doing what it's doing. A state could do something, right? That's really exciting. Portland and Oakland have banned facial recognition software in their police forces and city agencies. So what I suspect is that the states and the regions, Jay, will innovate much faster than the federal government. And we need to look at the efficiency of the Senate and the House in determining these issues; the quality of the mindset and the maturity in the debates from elected officials; the quality of whether they want America and the societies in America to do well, or whether they just want big tech to do well. And so the problem with the U.S.
at the moment is that it's looking to big tech to lead its future. And that isn't going to work. It needs to take a holistic approach where, yes, we need to look at the military aspect of AI. That's important. But what's the point in protecting the U.S. borders if AI is destroying society and the fabric of society? What are you protecting? So we need to take a more mature and holistic view where, yes, the military is important, but we also want the societies and the cultures within America to do really well. And that's important. And that, I think, is not seen as important within these debates, which I find quite terrible, really. Yeah. Well, and then going back to the, you know, the advance, if you will, in Europe, it strikes me that all these companies are global, you know, all the social media companies are global, and they're powerful and they're worth billions and billions and billions, and there's a lot of muscle there. So if you say, in Europe you have to, you know, characterize the risk involved, and we're going to, you know, try to slow you down and bring some ethical overlay to all of this, which is really a good idea, that's going to affect what, say, Facebook does in Europe. But query, Matthew: is that going to affect what Facebook does in the United States or elsewhere? If I limit Facebook's activities in Europe, does that have a global effect on Facebook's activities elsewhere? Well, it may do. We may see Facebook being broken up, and there have been conversations about the same with Google, right? We may see it broken up into separate businesses, where in Europe it's giving an ethical experience, without echo chambers and without control from advertising. And by the way, you know this: the facial recognition stuff you do on your phone, right? You hold up your phone and it basically says it's you. That's considered high risk in the European Union now. So what it means is all AI products and services will have to show whether they're high risk or low risk.
So what I suspect is that the social media experience in the European Union will be very different in two years' time than it is in America, unless America decides to do something quite brilliant and enter into a mature and evolutionary mindset around AI, and bring it forward as a purposeful technology to advance the whole of America and the whole of its societies. And if it does that, then it should be okay. But I don't see it yet. Oh, I don't see it either. One very interesting story that popped up in connection with the movie was this fellow whose phone was broken, but he wanted to look at his mail. So he took his laptop to a parking lot outside a particular store that was giving away wireless. And he opened the top of the laptop, and it came up with an error code. It said, you can't connect with these programs because your camera is not on. He said, what? All of a sudden it dawns on him that the programs he was looking at on his cell phone were turning his camera on. And because the laptop didn't have the camera on, it was showing him an error code. I find that fantastic. And this is, you talk about risk; people don't know this. They don't know their picture is being taken, nobody tells them; this fellow found out. And that's high risk. It is high risk. And that's why Coded Bias is an important discussion for the general public, as a platform to understand where the issues are. So there's a great story in the film, isn't there, of a teacher who gets all these awards in Houston, Texas, but AI has profiled him as a bad teacher. Yet outside of the technology, he's been recommended for this, he's winning awards for that, he's the best teacher of the month. And it just shows the disconnect between big tech and what's really going on in the real world, and how well people are really doing. And it shows a lack of nurturing. It shows a brittleness. It shows remote control.
And it's not cool; it's not good at all. So that was a great story, I thought, about the teacher. And he actually went to federal court, didn't he, and there's a case going on around this, to say: tell us how the AI algorithm came to this decision when I've got all these awards. Well, let's step back a little and talk about how AI works. Because, as you said at the outset, this movie actually helps you understand what it does and what it can do. And for example, why it's got implicit bias when it was not programmed to have implicit bias. It was not programmed very well in terms of excluding implicit bias, but nobody intended that it should be biased. It's just the way AI works. And what I mean by that, my perception of it, and I'm sure you have many more thoughts about it, is that AI is fundamentally going to compare the subject at hand with millions, tens of millions of other records and draw some conclusions. So for example, in the case of the African American, if AI looks through those tens of millions of records and finds that a large percentage of Black people are involved in crime, for example, or some violation of something, then it's going to rule against them and it's going to give them a low score. Even if they're good people, even if everything else points to a good person, the comparison against some kind of demographic they look at is not positive. So the fellow who's using AI to hire, the fellow who's using AI to give a loan or sell a house, all that, it's a negative score. And it's not based on the individual. It's based on certain characteristics that go to a larger demographic, which may or may not be accurate, but that's the data. Bad data is the big weakness in AI. And maybe the data was right, but it doesn't have an ethical overlay. It doesn't have the American morality. A sad state of affairs in the United States is being perpetuated.
So if I say there are a lot of people who are charged with crimes who happen to be Black, and I feed that into the system, then a lot of Black people can't buy houses, can't get loans, can't get into schools, and on and on and on. And it perpetuates their status as an underdog. And we've been doing this for a while. It's not just happening right now. And that's a really shocking part of the explanation of AI and how it creates bias, even without intending to do that. Am I right? What would you add to what I've said? Yeah, I would agree with you. Remember, there was a snippet of Steve Wozniak, and there was a story where he and his wife have the same bank account, same everything. He applied for, I think it was a loan or a credit card, and he got ten times more credit than his wife, and yet they hold the same assets. And there was an example about the Amazon recruiting system, wasn't there, where it was biased against women and biased for men. And the European Union regulations called this out, saying: no, no, no, no, no, you're not going to have bias in AI and recruitment. You're not going to have this bias anymore in these algorithms. So yes, the bias is proven to be in there. So the quality of the data and the quality of the algorithms combined determine the quality of the outcome. And we need ethics and an ethical intent with the data; that means there's transparency, there's governance, there's user agency of data. When we have ethics in the data and people involved in the conversation, then we have more data sets, because we're developing trust with the general public. More data sets means we can get AI better and more accurate in its decision making in the end result. That's really important. So yes, the algorithms need to be looked at, but it's also about the whole agency of data ownership for the individual. This is what I say, Jay.
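One standard check for the kind of lending and recruiting bias described here is the "four-fifths rule" used in US employment law: compare selection rates across groups and treat a ratio below 0.8 as a red flag. A minimal sketch, with hypothetical numbers loosely echoing the recruiting-tool story above:

```python
# Disparate impact check: ratio of the lower group's selection rate to the
# higher group's. Below 0.8 is the conventional red flag (four-fifths rule).

def selection_rate(outcomes):
    """outcomes: list of booleans, True = candidate was selected."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical outcomes from a screening model, 100 candidates per group.
men = [True] * 60 + [False] * 40      # 60% selected
women = [True] * 30 + [False] * 70    # 30% selected
ratio = disparate_impact_ratio(men, women)
```

With these made-up numbers the ratio is 0.5, well under the 0.8 threshold, which is exactly the signal that would trigger the kind of regulatory scrutiny the EU rules call out.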
Data is an extension of your human sovereignty, and therefore, as part of your human sovereignty, you should be able to own it, manage it, and have absolute transparency into how your data is being used and what it's being used for. What do you say to the person who says to you, Matthew, I don't care. My life, quote, is an open book, end quote. They can know everything about me. I don't care if Google or Facebook or Amazon or Apple sells all my data, because my life is an open book. What do you say to that person? Well, who gets to decide? Is it big tech and government, or is it the people? If I remember rightly, doesn't it start off with the people in the Constitution? So the destiny of the data, and how it's being used, should come down to the people and the people's decision. That's why I think, and this is going to sound really dangerous to people listening, we need a national referendum around user agency of data, and a national referendum around artificial intelligence itself. We need to bring the public into the discussion and the decision making. So to someone who says that: that's fine, but who makes the decision for the country? Who's deciding the future of data and artificial intelligence for the United States of America? Is it the people? That's what this movie shakes you up about, because you may not care whether your data is used, but in a demographic sense, in terms of the policies, social and business policies, governmental policies, you do care. You do care, because it comes right back around and it affects you. But I'm going to ask you a hard question, Matthew. Suppose you have this initiative at the governmental level. How exactly do you bring government in? I doubt the EU has gotten to this point. How do you make an algorithm transparent? And we use the word lightly, and it sounds very high-tech, but an algorithm could be a skyscraper's worth of code. An algorithm is a huge, huge thing. And how do you have transparency about that?
How do you know what it does or doesn't do? Can you have the public go in and look, and will they understand? And if they want to change it, who decides about the change? Yes, so that's a great question. So, all algorithms are mathematics, right? And when you have self-learning aspects of artificial intelligence, there's mathematics governing how it learns, okay? So it's all mathematics, really. Now, there are aspects, and this is why something called explainable AI is so important: with some of these algorithms, companies don't know how they're making decisions or why they're making decisions, because they're teaching themselves. So how do we get the general public to endorse, maybe, an algorithm? Well, it's very simple. We have a set of principles for what the algorithm must do, and what it's for, and who it's for, and what the benefits are, and how we catch it if it goes wrong. So there's a set of, and this is why I call it a digital citizen test, there are a set of principles I believe we can put in place to ensure the behavior of the algorithm. Now, that's the status today, Jay, what we call black box, right? It's like: input stimuli, see what the black box does, see what the outputs are, and see whether it does what it should. Where we need to go next is putting ethics, ethical mathematics, into the algorithm itself, so it's naturally ethical in all its different aspects of self-learning and all its different aspects of delivery of services within society, because it has ethical mathematics. Now, this is the advancement in artificial intelligence mathematics, and it's an opportunity, and I think we're going to have to invent it, Jay. So, those... It would be overarching. It would say: no matter what this algorithm might otherwise do, we are going to say... It takes me to the movie 2001: A Space Odyssey. You know: I can't do that, Dave. I have an overriding constraint here. It's built into my DNA. My algorithm is controlled. So, I can't do that, Dave.
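The black-box probing described here, input stimuli, observe outputs, can be made concrete with a counterfactual audit: feed the model paired inputs that differ only in a protected attribute and flag any case where the outcome flips. The model and its fields below are hypothetical stand-ins with a deliberately planted bias, not any real vendor's system:

```python
# Black-box counterfactual audit: no access to the model's internals, only
# to its inputs and outputs.

def opaque_model(applicant: dict) -> bool:
    # Stand-in for a vendor model we can't inspect; it (wrongly) peeks at
    # a protected attribute. This hidden penalty is what the audit surfaces.
    score = applicant["income"] / 1000 + applicant["credit_years"]
    if applicant["group"] == "B":
        score -= 5
    return score >= 40

def counterfactual_audit(model, applicants, attribute, values):
    """Flag applicants whose outcome flips when only `attribute` changes."""
    flagged = []
    for person in applicants:
        outcomes = {v: model({**person, attribute: v}) for v in values}
        if len(set(outcomes.values())) > 1:  # decision depends on the attribute
            flagged.append(person)
    return flagged

applicants = [
    {"income": 38000, "credit_years": 4, "group": "A"},   # borderline case
    {"income": 90000, "credit_years": 10, "group": "B"},  # clear approval
]
flagged = counterfactual_audit(opaque_model, applicants, "group", ["A", "B"])
```

The borderline applicant is approved as group A and rejected as group B, so the audit flags them; the clearly qualified applicant passes either way. This is one simple form the "digital citizen test" Matthew describes could take.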
And somebody has got to figure out what those rules really are. It's an establishment of the rules of our society, our morality, our ethics, and imposing them on whatever the algorithm might do. Yeah. So ethics is the framework that's being used around the world now to try to align the diversity of humanity, and the morality that we have in humanity, with artificial intelligence, and to try to bring them together in a common singularity of meaning and purpose. So AI ethics is literally about bringing the algorithms in line with societal values, right? And I'm saying societal values are not one exact set of values, right? That's what ethics are all about. Now, ethics will vary based on a nation's culture, maybe a spiritual culture or a belief culture or a religious culture, or your family culture or your personal culture or your organizational culture. It needs diversity in its ethics in order to be able to serve people well in society. And if we don't do this, Jay, the public will reject artificial intelligence. Well, and maybe it should. It should, because artificial intelligence is so powerful that it changes our lives. You know, one of the things I've always felt is that you go into a different firm, and I'm sure you'll agree with this, and you say, you know, you guys are using yellow pads. We have to get you up to speed here. We have to, you know, show you about computers. We're going to computerize the firm. And this happens all the time. And in the process of computerizing the firm, you change everything. The firm is completely different at the end of the process. No yellow pads, but all the rules have to be rethought. So I think what you're saying is that rewriting the algorithm, putting these restraints on it, you know, imposing ethical standards, moral standards, social, religious, cultural standards, putting it all in there, that is rewriting not only AI but our entire society. Am I right? Yes, absolutely. On the money. That's a great example. Thank you.
That's right. That's exactly right, right? Now, the question is whether, you know, half the people have yellow pads and the others have all the computing systems. We want everybody to come up together, Jay, in this new, if you like, digitized society. But unless ethics and morals are put into artificial intelligence and the algorithms, or at least into the way they operate, then we're moving towards a separation from what humanity is really about. And artificial intelligence is being controlled by six to nine companies in the world, based on their values and based on their view that profit comes before people and planet. You know, one of the things, and we only have a few minutes left, but maybe we can at least touch on it and then come back in another show, because this is so huge. This movie opened my eyes. You know, really, what Coded Bias showed me is that these organizations, both in, you know, authoritarian countries, where the government does it, and in America, where, you know, the corporations do it, have a profile of us. They gather this information, they exchange it, they sell it, they build huge storehouses of data about all of us individually and maybe even collectively. And they know who we are. Even if we don't care that they know, they know everything. And with mathematical formulae, those algorithms, they can predict what we want. That's why, when you go on a given website, you see something you've been looking at on another website: because you have demonstrated an interest in that product or service, and so it's in your profile, and somebody else is buying your profile, and now they're advertising something you expressed interest in elsewhere. Okay, that's a small example of it, but they know who you are, they know your interests, they know your personality, and they can predict what you're going to do.
They can predict your purchasing, they can predict your habits, they can predict your travel, they can predict your job performance. Everything can be predicted. It's like that movie with Tom Cruise, Minority Report: pre-crime. They can predict if you're going to commit a crime using AI. Well, it gets worse, and I'll stop in a minute. It gets worse because they can predict how you're going to vote. They can predict what your sensibilities are about voting, and, this is the worst part of all, they can predict your vulnerability. Your vulnerability to being lied to, to being manipulated by social media and the like. This is exactly what has happened with the Internet Research Agency in Moscow, with Cambridge Analytica in London, with Vladimir Putin trying to affect public opinion in our elections. And there's no sign that that's stopping. There's no sign that that's even subsiding. And that is the scariest part of all. Am I right? That's absolutely right. I can't add anything, other than it is clear that Facebook has allegedly been used to influence elections, in the run-up to votes in the European Union and also in the US. And so it may well be caught in the middle of all this, which is probably the case, but it needs to do better and put up guardrails. But you're absolutely right, Jay. That's a great analysis. Well, I'm concerned that it's going to happen again. And he who uses, or she who uses, AI for these vast purposes, to determine our tastes and vulnerabilities, our weaknesses, and then turn right around and get into those vulnerabilities and weaknesses and try to change our opinions, not just mine and yours, but those of millions of us. At the end of the day, there's a really good example of this in the movie: it was Facebook; they changed some message they were sending out to add some thumbnail photographs. And that resulted in a 300,000-person switch, just by adding thumbnails at the bottom.
You remember that part; that was extraordinary. And so it's not just you and me; the population in general can be affected. Will you talk about that? Yes. This is actually a form of social scoring, at the manipulative level, very subtle. At least the Chinese are very honest, right? We're going to take your data, and that's the way it is. Okay. In America, it's very different. And so what you're talking about is that you see a picture and you say, well, all my friends who are really close to me, Facebook shows they all voted this way. And so it's like, well, I'll probably vote that way now, because they're my friends and I trust them; we have great conversations; I know their kids, they know mine. So it's subconscious manipulation. And the European Union talks about this: there will be no more subconscious manipulation within social media, which will change social media in the European Union completely in two years' time. So yes, Jay, these are the subtle things. And this was a Facebook project, Jay. This wasn't Cambridge Analytica. This was Facebook doing a project. Now, this is not ethical. It's not good for the citizens. It's not transparent. It's not mature. And it's not the leadership of the future for the United States of America. Yeah, that's really scary, and hard to control. And I'm not sure that Congress understands what we're talking about, or that it would take affirmative steps, or, if you explained it to state legislators around the country, whether they would know what to do with it and how to follow it. And finally, and this is a good point to ask you a final question: if we find a way to stop them, isn't it so, just like with hacking, that we're in a spiral and they will find a way around us? So we close one window, and they're coming in the other. Isn't that what would happen in the ordinary course? That's one potential reality. The other way to do it is to innovate past them. Simple as that. We're seeing this with Patreon.
What you do is you make them irrelevant. You create ethical forms of social media. You make it transparent. You give people user agency of their data. You're clear about how data is used and what algorithms are in play. You're clear, with a new ethical transparency. You just simply innovate past them. And this is a huge opportunity to make Facebook irrelevant, actually. Now, what I suspect is that if Facebook truly wants to become a meaningful experience going into our future over the next decade, they'll have to reinvent themselves, because what they're providing at the moment is a good measure of how we're not doing well on social media and how it needs to improve. So how Facebook decides to change and pivot and transform to become aligned with the values in society, aligned with the constitution, honouring diversity and equity, that will show how good this company really is. And if it doesn't, then I believe it will become irrelevant in the future. I hope so, but does that come from public concern or does it come from governmental action? And when I ask you that question, I'm thinking of Fox News, and for that matter the New York Post, which in the last few days have published and broadcast stories that are absolutely untrue. Straight-faced stories that are lies. It's not the first time, obviously, it's not the first time, but they would claim that if they had some belief that it was true, even though they really didn't, that would fall under the First Amendment, and we can't stop them with that. And so it's hard to stop AI or social media from telling lies, because what is a lie? Somebody has got to decide what is a lie. Can you rely on AI to decide what is a lie? I think we will do, eventually. I think AI will become, as we've talked about, our personal digital bodyguard or guardian or digital angel that will protect us from all the manipulations that are going on. I think this will happen. And by the way, AI is being used to write articles in mainstream media, as you're aware.
I think what we have to return to, Jay, is critical thinking. If we return to critical thinking, we start to empower ourselves to assess what is the truth and what isn't the truth for us. Well, I want to assure everybody that Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence, by none other than Matthew James Bailey, was written by a human being. And it was a gift to me from Mike Fitzgerald, also in Denver. And I really appreciate that. It has opened my eyes, and so have you, Matthew. And I hope we can do this again and drill down some more, because there's one thing for sure: we're going to hear more about these issues. That sounds great. Thanks very much, Jay. I appreciate it. Thank you. Aloha. Thank you, Matthew. Aloha.