We are very fortunate today to have Phaedra Boinodiris joining us for our closing keynote. She is currently pursuing her PhD in AI and ethics on a generous scholarship from the European Union in collaboration with NYU. She has done so much work in digital inclusion programs. She founded womengamers.com and launched the first national scholarship program for young women to pursue degrees in game design and development. She received the United Nations Women of Influence in STEM and Inclusivity Award, received the Social Innovator Award from IBM, and was a 2014 Kenan-Flagler Young Alumni Award recipient. She was a fellow of the American Democracy Institute in 2011, and in 2007 was recognized by Women in Games International as one of the top 100 women in the games industry. She has written a book called Serious Games for Business and is a regular public speaker and contributor to articles in Forbes, Fast Company, the National Academy of Engineering's journal, NPR, and other publications. If you as a student are fortunate enough to go to NC State University, she is an adjunct professor there. She serves on Governor Cooper's COVID Task Force on Education, as a technology advisor to the State of Washington's CIO office, and on the boards of the Marbles Kids Museum, FIRST Robotics, and the NC State University Exec Ed program. She graduated from UNC Chapel Hill. Thank you, Phaedra, for being here, and take it away.

The pleasure is mine. Thank you so much, Charlotte, and thank you, everybody, for coming today. Let me just share my screen if I may. Charlotte, could you give me permission, please? And while she's doing that, I just want to say a quick hello to my kids, Athena and Xander. Hi. You know, as a mother, I'm going to take every opportunity I possibly can to embarrass them, because it's my God-given right.

All right, here we go. So this is actually one of my favorite topics, and I'm truly honored to get to speak to you about the subject of responsible AI today. Indeed, this is a subject that is very near and dear to my heart. And you are so lucky to have an entire summit focused on ethics and leadership. I cannot underscore this enough. If I had a magic wand, I would wave it so that high school kids around the world could have summits like the one you're experiencing today. So the purpose of my talk is not to terrify you, first of all, not to terrify you whatsoever, but really to enlighten you and, more importantly, to galvanize you, to make you highly cognizant of what this space is, right? And know that everything I'm going to be describing to you today is absolutely fixable. That's the entire latter half of my presentation. So no, I don't want you to leave feeling pessimistic about a dystopian future. I want you to leave galvanized, wanting to really sink your teeth into this space and advocate for it.

So, a little bit about me and how I got into this. I was really lucky to grow up in a family of technophiles. Both of my parents immigrated to the United States from Greece. They went to college at the University of Florida in Gainesville, both studying electrical engineering, which is where they met. And my sister and I grew up always being encouraged to play around with computers. And when I say play around, I mean taking them apart and putting them back together.
And of course, we gravitated toward playing games and eventually designing our own games for each other, for our friends, for the neighborhood kids. So that's the environment we grew up in. My sister and I eventually started our careers in the maths and sciences. I got a degree in math and computer science, and my sister in design and engineering. We graduated at a time when there were no such things as game design and development degree programs, as there are now. But we maintained our passion for video games, and so we started a company called womengamers.com and ran it for about 15 years, which was a really fun experience.

Then I ended up going back to business school, because I saw this big shift in the games industry, and I said, OK, I want to learn everything I can with respect to going after venture capital money and possibly starting my own studio. And while I was there at UNC Chapel Hill, my alma mater, I threw myself into these things called case competitions. This is when a company will go to a business school with a real-life challenge, and the students have overnight to read the challenge and come up with an idea, and then they have to pitch it to a bunch of judges the next day. I thought these were super fun. I really liked them. I was on my sixth one when IBM came to the school and said, I've got this challenge, this business process management software, and I'm looking for innovative ways of explaining it to non-technical people. I had no idea what the heck business process management was. But they handed us a stack of papers about this thick, and I'm reading it and I'm thinking, this is a strategy game. I can tweak different business rules and see how they affect my broader ecosystem. I could have competing models. I was thinking of something like a SimCity-style game, God view, right? And I pitched this idea, and one of the judges was a vice president for strategy at IBM. And she pulled me aside and said, I'm funding this right now. Can you make this game for me in three months?

So that was my segue into the world of IBM, just recognizing that games, when well designed, can be really adept at explaining complex systems. So initially I made games to explain a lot of the complex systems that IBM builds, and I even went into things like disaster preparedness and disaster response games for the US Department of Defense. And that's where I started to really recognize the power of artificial intelligence: just this idea that with a technology like AI, I could learn as much as I possibly could about a person, about their environment, so that ultimately I could curate an experience that is highly, highly customized for them. That's in essence what I ended up doing. And I did a lot of interesting work with high schools around the country and around the world, making things like smarter toys and medical Minecraft.

And then something happened in 2018 which filled me with such disgust, such horror, such anger, such dismay that it really gave me pause. It made me decide that I wanted to learn as much as I possibly could about the realm of AI and ethics. That event, that quintessential event, is the news about Cambridge Analytica. And if you're unfamiliar with Cambridge Analytica and what happened in 2018, this is part of your homework assignment today. It's something that I really want you to look up and learn about.
So I was angry, and I said, OK, I want to learn about responsible AI and what can be done to help mitigate the risks of something like this ever happening again. And the more I've been studying this space, the more I'm recognizing what I call the good, the bad, and the ugly, right? I had for so long been invested in artificial intelligence for good, tech for good. And here is this example of the bad with Cambridge Analytica and its blatant mal-intent. And then there's the ugly. And the ugly is actually the space that is most prolific, right? That is when organizations that have the very best of intentions can still cause quite a bit of harm, due to biased data being used to train their algorithms. And that's really where I see the most prolific cases today.

And here's what I wanted to talk about with you today: artificial intelligence is being used to solve all kinds of problems, to make all kinds of high-stakes decisions that directly affect people, right? Whether you get into that college you applied for, whether you get that job you wanted, what percentage interest rate you get on your loan, whether you get that credit application or not, whether you get sentenced to prison or not. I mean, there are all kinds of ways in which AI is helping to make major decisions about you. And again, the intent may be honorable, but due to bias, there are concerns that that bias could get systemically calcified.

And there are two major issues with this, two major things that keep me up at night. One is that, first of all, people often don't even know that an AI is making a decision about them. It's not like there's any overarching regulation or protection that says, yep, they've got to tell you when this happens. That's number one. But number two is that, for whatever reason, people think an AI is magic, that because a decision is coming out of an AI, it couldn't possibly be biased against them, because it's not a human, so it must be morally or ethically squeaky clean. And as you now know, that couldn't be farther from the truth, because it's people, people who have bias, who make the decisions about what data to choose in order to train the AI, right?

All right, so I'm going to give you some very salient examples that have certainly resonated a lot with me, starting first with precision medicine. There's a lot of excitement about this idea of training smart agents, training an artificial intelligence to learn as much as it can about a person, about a person's DNA and health history, et cetera, in order to come up with a custom regimen for them. Right? Here's how many pills you should take, here's what you should do in terms of your lifestyle, and we're going to become healthy. Now, in this country, historically, here is what we have found. This is one data set, and there are zillions of them: African Americans are routinely prescribed fewer pain meds than white Americans. Now, the silver lining is that for the most part, this kept them out of being part of the major opioid crisis. But it is actually due to systemic racism. There's no other way to quantify it or couch it. So if historical data like this, coming from our medical institutions, coming from our doctors, is being used to train an AI
for things like precision medicine, then that same systemic bias gets calcified. Right?

Here's another example. Amazon had built an artificial intelligence to be used to recruit their next generation of employees, and the way they trained this AI was by using historical data sets: historically at Amazon, what kind of person, what skill sets, what background was successful, and can we find more people like that? Now, note, they stated they stripped out things like gender and race from the data set. They stripped it out because they said, obviously we don't want to choose based on somebody's gender or race or ethnicity, right? We basically want the skills and competencies and backgrounds that have historically matched what's been successful before. But here's the challenge with this thing called hidden bias, and this is a really important point: hidden bias exists in the rest of the data. For example, even though gender and race were stripped out of the data set, if your resume said that you led a Girl Scout troop, that resume got ranked lower, because the historic resumes that had been used to train the algorithm didn't have those words in them. That's what I mean by hidden bias. It's baked into the rest of the data set.

Another example of bias in data: there was an application that had been built, called COMPAS, for the judicial system. It would in essence give a recidivism score for prisoners in the courtroom, meaning: what are the chances that this person who's sitting in front of me in my courtroom today is going to commit another crime? You put in the data, the system looks it up, and the algorithm makes some kind of a determination. The publication ProPublica did an exposé about this application and said, guys, you can't use this in the courtroom, because you're using historically racist data sets to inform the algorithm. The decisions coming out of this algorithm are promoting bias, right? And as soon as that exposé came out, the application was yanked.

Now, there are some wonderful tools out there, and some wonderful publications that really explain some of the deeper concepts I'm describing to you here, especially things around hidden bias and fairness. MIT Technology Review is a fantastic publication, and they have a great article specifically about the ProPublica exposé of the judicial system app, talking about fairness. And it's really important, if you're interested in this space, to dig into how algorithms approach fairness, because in an algorithm, fairness is actually a mathematical definition that somebody has to choose, right? What's fair to me may not be fair to you. I mean, just look at the concepts of equality versus equity, right? Now imagine it even more granular, because that's how granular you can get when you're configuring an algorithm, as the sketch below makes concrete.
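Here is a minimal Python sketch of that trade-off, using entirely fabricated toy data. It compares two of the standard mathematical definitions of fairness: demographic parity (do both groups get approved at the same rate?) and equal opportunity (among the people who truly qualify, do both groups succeed at the same rate?). The groups, labels, and numbers are invented for illustration only; the point is that the very same predictions can pass one definition and fail the other.

```python
# Two competing mathematical definitions of "fairness" on toy data.
# Each record is (group, truly_qualified, model_approved) -- all fabricated.
from collections import defaultdict

records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1), ("B", 0, 0),
]

approved = defaultdict(int); total = defaultdict(int)
tp = defaultdict(int); qualified = defaultdict(int)

for group, y_true, y_pred in records:
    total[group] += 1
    approved[group] += y_pred
    if y_true == 1:
        qualified[group] += 1
        tp[group] += y_pred           # a qualified person who got approved

for g in sorted(total):
    parity = approved[g] / total[g]          # demographic parity view
    opportunity = tp[g] / qualified[g]       # equal opportunity view
    print(f"group {g}: approval rate {parity:.0%}, "
          f"TPR among qualified {opportunity:.0%}")

# Output: both groups are approved 60% of the time (parity holds), yet
# qualified A applicants succeed 100% of the time versus 50% for B
# (equal opportunity fails). Someone has to decide which test matters.
```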
I'm going to give you another example. And again, it's not just bias in artificial intelligence that we need to be concerned about; it's bias in data, whether you're using an AI or not, right? So here's a great example of bias in data that may not necessarily be directly social, not having to do with race or ethnicity, et cetera, but is still bias. The city of Boston created an app you would install on your smartphone, where as soon as you drove over a pothole, the app would send the geolocation coordinates back to the city, which would then send crews to come fix the pothole, right? So you might think to yourself, OK, where's the racism? Where's the sexism? I don't see it here, right? Now, the challenge with this approach isn't inherent bias in data that has been used to train an AI, but inherent bias in the process. So I wanted to give you this example. Harvard Business Review actually did a review of how this app was working out, and in hindsight it seems really, really clear. In essence, what happened was: who do you think had the app on their smartphone? Whose potholes, in whose neighborhoods, got fixed first, and whose didn't get attention? It was the affluent, who had the smartphones, who had the app, who got their potholes fixed first, and those of lower socioeconomic status, in the disadvantaged neighborhoods, did not get their potholes fixed. Right? So again, with an example like this, I want you to think: in the early design of an application like this, what could have been done to make sure these unintended consequences didn't come to fruition? A quick simulation of the effect follows.
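Here is a minimal simulation of that pothole dynamic. The pothole counts and app-adoption rates are fabricated assumptions; the mechanism is the real point. Both neighborhoods have identical needs, but the city only ever sees the reports, never the ground truth, so a repair queue built "fairly" from reports still skews toward the affluent neighborhood.

```python
# Selection bias in the reporting process, not in any trained model.
import random
random.seed(42)

neighborhoods = {
    # name: (actual potholes, assumed smartphone + app adoption rate)
    "affluent":      (100, 0.80),
    "disadvantaged": (100, 0.15),
}

for name, (potholes, adoption) in neighborhoods.items():
    # A pothole only enters the city's queue if a driver with the app hits it.
    reported = sum(random.random() < adoption for _ in range(potholes))
    print(f"{name:14s} actual: {potholes:3d}  reported: {reported:3d}")

# The city services reports in good faith, yet the disadvantaged
# neighborhood's potholes are mostly invisible to the system.
```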
Here's an example from the realm of coronavirus. Artificial intelligence has been used significantly with respect to things like contact tracing, and I know China certainly has been in the news quite a bit in that regard. But know also that in many countries around the world, data privacy laws are currently being changed in order to advocate for the gathering of citizens' data, where highly personal, personally identifiable information, who they are, where they're living, any other kind of data vector, is being shared with groups including the police. So again, it's interesting to watch these major macro trends happening around the world.

Now, one major concern with respect to artificial intelligence is that indeed, it can be used to manipulate how you see the world. I don't know if you've seen the Netflix documentary The Social Dilemma, it's really more like a docudrama, but it emphasizes this point: AI can be used to manipulate how you see the world. And YouTube's algorithm was indeed an excellent example. How is that algorithm being optimized?

So, a few tools that I wanted to make sure were on your radar. Google PAIR (People + AI Research) has come up with some very interesting tools that explain some of the stickier ideas I've mentioned to you. One example is how you measure accuracy with respect to fairness. Hidden bias is another one I was telling you about. You may come across organizations that say things like, we stripped race out of the data set, we stripped gender out of the data set. Just remember the examples I've told you: every time you hear something like that, the hairs on the back of your neck should stand up, and the first thing you should think about is hidden bias. Another fantastic resource for me is from the Future of Privacy Forum, which has in essence categorized the various ways in which automated decision-making can cause harm, both on an individual basis and on a collective, societal basis. These kinds of categories can come in incredibly useful as you're thinking about different approaches toward artificial intelligence.

Now, when I think about what to do with this kind of information, the first thing that comes to mind is: I've gotta educate. Educate, educate. The more people that know about these risks, the more we can galvanize citizenry to demand explainability, to demand transparency in AI, to demand trust in artificial intelligence, right? Which is why I am, again, so very pleased that the North Carolina School of Science and Math is having a session like this to teach people. I can't underscore enough how important it is.

So I had been racking my brain on how to introduce some of these concepts, especially things like hidden bias, to kids. How do I do it? I mentioned as part of my bio that I'm on the advisory board for the Marbles Kids Museum, and every year they do a kids' coding event. And I reached out to the Raleigh Charter High School robotics team, and together we created an AI-powered Harry Potter sorting hat. So just like in the books and the movies, you put the hat on your head and you say something about yourself, something unique to you. The way the AI works is that it has a natural language processor in there; it picks up what you say and categorizes you into a Hogwarts house. Now, I rigged the hat. I rigged the hat because I wanted to teach something about AI and ethics. I rigged it such that if any of my kids were to put it on, it would immediately put them into Slytherin, which is exactly what happened. My daughter, who's now 16, she was 14 at the time, puts the hat on her head and says something unique about herself. And again, I had pre-programmed it, so I guessed what she would say. And immediately the hat's lips are moving, I guess you'd call them lips, and it says Slytherin, like the voice from the movie. And she crosses her arms and she glares at me and she says, Mom, you rigged the hat. And I said, let this be a lesson to you: never trust an AI that's not fully explainable and transparent. You should be able to ask the hat, what data sets did you use in order to put me in Slytherin? What was the algorithm? What was your approach? And there should be some kind of a feedback loop, so that if you've got concerns that you were put in the wrong house, the AI would somehow know, and you'd be able to have a conversation where potentially those designers, those developers, could tweak the algorithm. So again, a clever way of using pop culture as a means to get the basic idea across.
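For flavor, here is a toy sketch of how a rigged hat like that could work. The names and keyword rules are invented, and a simple keyword lookup stands in for the real natural language model; the point is the hidden override. From the outside, the answer looks like "the AI decided," and nothing in the output reveals that certain users can never get a fair result.

```python
# A rigged, unexplainable classifier: the output never discloses the override.
HOUSE_KEYWORDS = {
    "Gryffindor": {"brave", "bold", "daring"},
    "Ravenclaw":  {"curious", "clever", "bookish"},
    "Hufflepuff": {"loyal", "kind", "patient"},
    "Slytherin":  {"ambitious", "cunning", "driven"},
}

RIGGED_USERS = {"Athena", "Xander"}  # the hidden bias baked into the system

def sort_into_house(user: str, statement: str) -> str:
    if user in RIGGED_USERS:        # invisible to anyone reading the output
        return "Slytherin"
    words = set(statement.lower().split())
    scores = {h: len(words & kw) for h, kw in HOUSE_KEYWORDS.items()}
    return max(scores, key=scores.get)   # ties fall back arbitrarily

print(sort_into_house("Athena", "I am brave and bold"))   # Slytherin, always
print(sort_into_house("Morgan", "I am brave and bold"))   # Gryffindor
```

The same input produces different outcomes for different people, and only transparency about the data and the rules would ever surface that.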
Now, I'm gonna switch gears a bit and talk to you about what other people, especially people in positions of power in companies, think with respect to advancing AI and ethics. There were some things that came out in a report for which we surveyed over a thousand executives worldwide. This, again, was pre-COVID, so a lot of the numbers in here have now ratcheted way up post-COVID, and I'll explain what I mean. Some really surprising statistics came out of this. The first, obviously, is that AI is gonna have a major impact on skill sets and on the workforce. We're on the cusp of a technology tsunami in the form of robotics, AI, and automation. Back in 2018, the estimate was that more than 120 million workers in the world's 12 largest economies may need to be retrained or reskilled in the next three years. And mind you, this has ratcheted up in terms of how quickly that need is arriving, because AI, automation, and robotics have sped up due to COVID, right?

Now, the next point that I thought was really interesting: when they surveyed chief human resource officers, over 60% of them said, we don't need to retrain our people. It's not our responsibility. We have either no obligation or a minimal obligation to offer that kind of retraining. Which I think is very interesting. I want you to keep that in mind. And then with respect to ethics, barely half of the CEOs thought this was a CEO-level issue that they even needed to be worried about. And when they were pressed on who would lead your AI ethics board, or who would be in charge of AI ethics for your organization, they would pick either the CTO or the CIO. They didn't even consider that it could be a non-technical role. Now for me, when I read this, I thought, this is so much like the fox guarding the henhouse. Why would you pick your CTO, who's responsible for designing and developing things like artificial intelligence, to lead the AI ethics effort?

So this leads me to the part of the presentation where I explain what can be done to stop this. What can be done? How do you set up guardrails to mitigate the risks of bias in this way? I think of it as a three-pillar approach: culture, forensic tech, and governance. This is not something that technology alone can solve. You can't just say, here's a piece of software, this will go and fix this major cultural issue for you, and by the way, all the isms, the sexism, the racism, and so on, along with hidden bias.

In culture alone, there's a ton to unpack, including things like teaching cognitive bias in organizations, but then making sure you incorporate that teaching into work product: the training you had on cognitive bias and how you're supposed to treat others, you actually have to apply when you're deciding what kind of data you're gonna use to teach your algorithm, right? Incorporating ethics into design thinking, so you can begin asking: what are the unintended consequences of this approach? Remember that city of Boston example. Who am I missing in my target audience? By not going after them, could I hurt them in some way? These are things to be incorporating. Red team versus blue team: these are really popular concepts in the world of cybersecurity, where you have, like, a white hat trying to hack into a system in order to make it stronger. Red team versus blue team is a similar concept, but shifted into the world of ethics, where you form a team to stress test the assumptions being made by the original design and development team. Like, well, have you thought about this? And what about this group over here? Because they have these kinds of constraints and situations. Some kind of a group that has the mandate to really push against the base assumptions of the design and development team; I'll show a tiny sketch of one such probe in a moment. I mentioned the feedback loops and sharing. And, first of all, making sure those design and development teams are diverse and inclusive.
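Here is a minimal sketch of one red-team tactic along those lines: counterfactual probing, where you change only a feature that should be irrelevant and watch whether the model's score moves. The scoring function below is fabricated, and deliberately flawed, to echo the Amazon resume story from earlier; the feature name is hypothetical.

```python
# Red-team counterfactual probe: flip one "irrelevant" feature, compare scores.

def resume_score(features: dict) -> float:
    # A deliberately flawed toy model: it "learned" from historical hires
    # that a certain resume token was rare, so it quietly penalizes it.
    score = 50.0
    score += 10 * features.get("years_experience", 0)
    score -= 15 if features.get("girl_scout_leader") else 0
    return score

candidate = {"years_experience": 4, "girl_scout_leader": True}

baseline = resume_score(candidate)
counterfactual = resume_score({**candidate, "girl_scout_leader": False})

gap = counterfactual - baseline
print(f"score with token: {baseline}, without: {counterfactual}")
if abs(gap) > 1e-9:
    print(f"RED FLAG: an 'irrelevant' feature moved the score by {gap:+.1f}")
```

In a real audit the red team would run probes like this across many candidates and many features; a single systematic shift is the kind of hidden bias the design team's own tests can miss.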
Again, there's lots to unpack here about culture. And think of the entire wrapper around all three of these bubbles as being diversity, equity, and inclusion.

Forensic technology: this is when you might use a piece of software, a subscription to a software service, to mine the data in order to flag it for bias, or indeed to generate fake data to stress test an algorithm, well before it goes out to market. And these kinds of applications actually generate labels. One of the things that bugs me today is that I want these labels to be really easily understood, so that the layperson can just look at one and say, okay, I can see it fits these kinds of standards, I can take this loan out from this bank, right? I'll show a toy mock-up of such a label in a moment.

The last one is governance: what's the published policy that the organization is promising to the world and to its employees with respect to bias standards?

So again, culture is a biggie, and I wanted to show you this slide. This was an epiphany I recently had. I have no data to prove what is on this slide, but I'm noticing it as a major pattern the more I'm in this space. Basically, above the waterline is what is being discussed in this world with respect to responsible AI and bias in data. That's things like: gosh, I'm worried about social bias in my tabular big data, or discussions about facial recognition, or concerns about litigation, what if I get sued, or has this application been optimized incorrectly for non-social bias? That's above the waterline. But as soon as you scratch the surface, here's what I'm seeing below: there's no diversity among those data scientists. They're having trouble attracting and retaining women and minorities. They have no history of business resource groups, which are the women's and minority groups within organizations that form affiliations. These are the issues. That's why I'm saying this entire conversation about bias and technology is wrapped in a diversity, equity, and inclusivity wrapper. These are tightly aligned.

I mentioned incorporating ethics into design thinking. These two URLs are gold. I highly recommend you jot them down if you wanna check them out. Basically, they train design and development teams how to do this. How do you do this? Here's a very innocuous example: you're designing a hotel's in-room virtual assistant. Think of it like an Alexa in your hotel room that you're gonna be communicating with, and you wanna make sure that whatever comes out of that virtual assistant isn't gonna be biased in any way. So how do you do this? What are the ways in which you stress test the assumptions, right?

Okay. Another thing to do with respect to culture in the organization: I mentioned teaching cognitive bias and its implications for AI everywhere, from the top down, and what it means with respect to work product. But I also wanna make a point of saying: in medical schools, to legislators, to our next governors, congresspeople, presidents. We must have more people who are creating public policy associated with this space who are highly informed about the risks of things like bias in data. And of course my last one: to all high school students, if not middle school students, what you're learning today is so vastly, vitally important. Again, what I would not give to have education like this proliferate at an earlier and earlier age.
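On the forensic technology point from a moment ago, here is a minimal sketch, with fabricated data and an assumed disparity threshold, of the kind of plain-language "label" such tooling could stamp on a dataset before a model ships. A real label would cover far more checks (missing data, proxy features, provenance), but the goal is the same: something a layperson can read like a nutrition label.

```python
# A toy "dataset label": per-group outcome rates plus a pass/fail verdict.
records = [  # (group, positive_outcome) -- made-up historical decisions
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

THRESHOLD = 0.20  # max acceptable gap in positive-outcome rates (assumed)

rates = {}
for group in {g for g, _ in records}:
    outcomes = [y for g, y in records if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

gap = max(rates.values()) - min(rates.values())

print("=== DATASET LABEL ===")
for group, rate in sorted(rates.items()):
    print(f"Group {group} historical approval rate: {rate:.0%}")
print(f"Largest gap between groups: {gap:.0%}")
print("PASS" if gap <= THRESHOLD else "FAIL: gap exceeds standard; audit before use")
```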
So you're seeing companies now deliberately taking stands around things like data privacy and responsible AI. In fact, at IBM, our CEO wrote a letter to Congress about social justice and pulled IBM out of the facial recognition market. There have been conversations: I know Elon Musk has talked about AI being a major existential threat, and note that the way he characterizes it as a major existential threat is, he says, because it's not open. Which is why he spun out the organization called OpenAI, which is now getting pushback for not being open enough. So again, there's a push and pull with respect to where corporations stand in the realm of artificial intelligence. Google, additionally, has been in the news with respect to how they built their AI ethics board and what their stance is, and I know they're also rolling out a consulting group with respect to responsible AI. It's a fast-moving space.

I mentioned the use of forensic technology to analyze data, and I wanted to show you an example of what these labels could look like. This one takes the data that came out of the recidivism app, the one I mentioned from the ProPublica exposé, and it's the kind of thing you might get as a data scientist to be able to gauge: okay, is the data that has been used to train this particular algorithm biased or not? And again, my concern is that I would want this to be as easily understandable by the layperson as humanly possible.

Standards and governance: this is an area that will give you a framework to follow with respect to what an AI's life cycle should be, but also things like what standards your organization is gonna stick by and how you build an AI ethics board. The World Economic Forum came out with a toolkit to teach organizations how to do it: having whistleblower protections, anonymous submissions, a rotating leadership model, making sure there's no conflict of interest from anyone on your board. And then thinking through, again, not just what standards you're gonna promise to the market, but what you're gonna promise to your own organization, your own employees.

Now, you may be thinking, okay, I'm not in an organization yet. What do I do? What do I do from the outside? This is something you can advocate for today. Again: education, education, education. Because unless we've got people knowing about this space and really advocating and fighting for it, there aren't gonna be protections. There isn't gonna be a demand for things like: if you use an AI to make a decision about me, I wanna know about it. Transparency, much less explainability, as in: show me, tell me what kinds of data sets you used in order to make this kind of decision.

And I also wanna get across that I think we in education have been making a major mistake in packaging up this kind of learning about artificial intelligence, bias in AI, ethics in AI, et cetera, and saying, okay, this is only for those who self-categorize as computer scientists. How is it even possible that we're doing that? Why are we not making this widely available to everybody? You wanna be the next generation's leading fashion designer? You should know the fundamentals of artificial intelligence, because it is gonna change your world. You wanna be a leader in agriculture for your region? You need to know the fundamentals of artificial intelligence, because it is gonna change your world.
You're gonna be a citizen of this world that is gonna have decisions made about you by an AI. You'd better understand bias in data so you know what kinds of things to demand. We're doing a major disservice, and we must be smarter with respect to how we forge a sustainable future in this space. The way I wanna leave this with you is that for our biggest aspirations as humans, whether it's traveling to other planets, turning around a vaccine faster, or tackling climate change, we're going to need technologies like artificial intelligence to pull it off. We can't go backwards in time. That's not gonna happen. We can only move forwards. But we must, I urge you, we must move forward curating a smarter conversation, a much smarter conversation than we have been having about this space. Okay, thank you so much for your time. I'll take any questions that you might have for me. And again, I'm truly honored to have been able to give this talk to you today.

So, Phaedra, I'm gonna ask some of the questions that we've received in the chat and send them over to you. Okay. The first I'd like to start with is, we have a student who'd like to know what kinds of games you've built.

Well, let's see. Mostly they were strategy-style games, mainly. We also did a number of what I'd call first-person storyline games to teach certain kinds of concepts, and some smarter-city-style games that explain some of the concepts behind smarter cities. Just a wide variety. In fact, there are some great publications online that explain the serious games space and the ways in which games can be used to help solve complex problems, or create safe spaces for people to solve complex problems. I know Games for Change has a huge repertoire of examples for you to check out. But yeah, it's truly an interesting space.

That's great. And similarly, if you look up womengamers.com, is that the website you were a part of? Is it still standing? Is there another way?

It's no longer there. My sister and I shut it down a while ago. Now we have a Facebook group where the alumni hang out and chitchat. But, you know, I've moved on. Again, we've been doing all this stuff with IBM, and my sister now is a VR engineer in a game studio making horror games. So we've kind of moved away from womengamers.com, although obviously we're still advocates for women in gaming.

Okay, great. This other one is a little bit longer, so I'm going to read it. This is from Andy: Do you think there are any dangers to full transparency in AI? OpenAI is on AI's cutting edge. They're aiming for openness, but isn't it counterintuitive to develop powerful machine learning algorithms and then publish it all for anybody to use?

Such a good question. And that's actually one of the major issues with the OpenAI discussions, right? With the OpenAI organization, there's a push that says it's not open enough. And yet, at least from a commercial perspective, you could say, wait a minute, I don't want to give away my secret sauce on how I came up with this decision. But then there are also things like, if you were to use it, for example, in defense, it could absolutely be manipulated, right? Once I know how an algorithm is making a decision, I can determine what kinds of things to show the algorithm in order to get the decision that I want. So yes, yes. And I guess the question then, Andy, is: how can we protect people, knowing this, right?
Think about GDPR in Europe and the way it's been curated in order to protect people's privacy. How do we then move forward in order to create standards to ensure, at the very least, that our data is not being misused in some way? Perhaps explainability goes to a certain extent to ensure certain kinds of protections and standards, but without opening things up such that they can be manipulated with mal-intent.

We have a follow-on to that. Do you think we need a new political system, a new form of governance, to adapt to the onset of the age of AI?

Oh, I love these questions so much. Okay, so we used to have an Office of Technology Assessment in the United States government. This was a nonpartisan organization. It was technologists who, in essence, saw themselves as doing a civic tour of duty in government. And what they would do is work directly with legislators in Congress to give them advice about investments and public policy with respect to technology. And what happened was that back when Reagan was president, there was this big conversation about Star Wars, which was, you know, missile defense in outer space, and the technologists in the Office of Technology Assessment said, this is a really bad idea. We should not be doing this. And members of Congress didn't like that answer and cut funding to the Office of Technology Assessment. And it hasn't been resurrected since, which bugs me. Really, really bugs me, and it bugs a lot of people. So I think there is an opportunity to bring back this idea of a civic tour of duty, not just on the national scale. How awesome would it be to have this for the state of North Carolina, or even at a county level, right? A civic tour of duty for technologists to just go in and help inform and advise with respect to different kinds of investments. And I don't know if you watched the Mark Zuckerberg hearings, including the recent Senate panel hearing, but so many of our legislators just fundamentally don't understand. They don't understand. So I agree: we desperately need an infusion of knowledge. That's why I mentioned that we need our legislators, our next generation of folks who want to inform public policy, to really understand the kinds of things I'm presenting today.

Okay, we're going to go in a slightly different direction. Do you have any ideas or thoughts about transhumanism?

Transhumanism. Using AI to become immortal, abolishing... Yeah, yeah, yeah, yeah. I'm not worried about that in the short term, as much as I'm worried about what I explained, you know? We're making really basic mistakes, really basic mistakes, about what data we use, about even recognizing our own biases. Basic mistakes that are causing major harm. I worry about this more than I worry about transhumanism, or that somehow we're going to have our robot overlords, et cetera, et cetera. I also worry about the re-skilling of workers. I worry very much about workforce displacement, and I worry about our school systems not being agile and adaptable enough to get in front of the technology, to rethink how we approach education, not just the kinds of things we're teaching people, but how we're teaching people. These are the kinds of things that keep me up at night.
I mean, you know, I grew up watching Star Trek, so maybe that had an influence on me with respect to more of a technology utopia versus a dystopia, but I'm an optimist at heart. I really am. And as I mentioned at the beginning, I really feel that these are things that we can overcome, but we just have to be smarter about how we're approaching it.

Thank you for that answer. I think we share a lot of interest in some of those things you just mentioned. We have a student question about where you would advise a student to start if they're trying to educate their peers about AI.

Oh my goodness. The North Carolina School of Science and Math has an excellent AI curriculum. You guys are so lucky. Have I mentioned that? And so I think it's great what the school is doing, again, not just with the summit, but the whole foundation it's laying. I mentioned the Harry Potter sorting hat as an example of using pop culture, right? I think pop culture can be a really powerful vehicle to capture people's attention. I think also narrative, storytelling, authentic narrative and storytelling, to make it real for people so that they understand what this means. I wanna get it out of the domain of just computer scientists, as I mentioned, so that this can be something that is digestible by as many people as humanly possible. That, I think, is the opportunity with respect to education. So whether that's docudramas or documentaries like The Social Dilemma on Netflix, or programs like this, or journals like the MIT Technology Review, there are a lot of different resources. But I think the key is: make it real for people, so that they understand what it means for them, and that this is directly correlated, directly correlated, to a lot of the social unrest that we're seeing today. That same bias that we have walking around the streets is getting calcified in our systems.

We have one last question before we're out of time, and that is: is there a way to ensure that training data for machine learning is inclusive, and how does AI capture the lived experiences of racial minorities?

Remember those three pillars I showed you earlier: culture, forensic tech, and governance. Those are the guardrails that I propose, so that when you've got a group of data scientists gathering or curating data, first of all, trying as much as you possibly can to have that group be diverse and inclusive is really key. Having an AI ethics board that's diverse and inclusive, with all of those safeguards. Having that process of incorporating ethics into design. Picking the red team versus the blue team. There's that whole culture piece, and then the use of forensic tech to mine the data, flag patterns of bias, and then generate labels that actually fit to standards. Right now, those are the guardrails we're proposing to put in place, and we're actively going out and training organizations to adopt them. So that's the key. But again, again: education, education, education. Organizations aren't going to do this unless people demand it, and people aren't gonna demand it unless they know about it. Which is why, again, this program is so vastly important. So go forth and advocate for this space, because it's so incredibly important. You're so incredibly important. Thank you again for your time today.