And I'm really excited to share this with you. Remember, one of the main themes is this concept of inclusion and how important having people with diverse backgrounds is to creating good artificial intelligence. Alexa, of course, is kind of where I grew up in AI. But I've now gone in and worked with computer vision models, specifically models that are looking at people's faces. I actually heard someone say (and you're in here somewhere, sorry, I can't see you) that there was this article that came out about how Alexa was going to start using dead people's voices, like your great-great-great-grandmother's voice, to talk to you. You instantly think, that's kind of creepy, which is maybe true. For those of you in my breakout session, I'll talk more about the ethics of whether we should do that or not. But the interesting thing is that this has significant implications in different industries. I am super excited to bring to the stage the next keynote speaker, who is going to share with you a very unique perspective and journey. She comes from the world of crime and criminology, and that has shifted her into the world of artificial intelligence. Her name is Renee Cummings. I'm super excited to have her here. She teaches all over the world, similar to me. We share the stage on as many occasions as we can, trying to share with the world the kind of crazy state we're in, where anyone can build anything. But it's so critical that we ask better questions and take the time to think about justice. And so Renee Cummings is one of my favorite champions of justice. I also want to encourage you right now to think about questions as she's presenting, including things that came up earlier. If you have any questions for us at all, we're gonna do a fireside chat and take your questions live. There'll be a microphone running around, so you can go ahead and just ask out loud.
I saw many of you actually talking out loud, so I know it's possible. So you can just ask your question on a mic. But if you don't really find comfort getting on a microphone in front of all of your friends, you can instead go to Slido. If you're familiar with Slido and how to get on there and ask a question, raise your hand please. No one? Okay, so you go to slido.com. I think it's s-l-i-d-o dot com, and then you type in the event code. I'm looking for someone who knows that this is right. E-L-C-2022. And it'll take you to a Q&A. So I'm legitimately giving you permission to get on the internet right now. Oh, it's on, there it is. Wait, it's gonna go quick. So: slido.com, ELC2022. And you can ask any question. This is for those of you who don't feel comfortable. I encourage you, though, to take the mic. Does anyone know T.D. Jakes? He always says, don't drop the mic. Take any moment you have to ask your question, say it out loud. We'd love to talk to you. So now that we're prepared with Slido for any question about what we're talking about, we'd love to share our journeys, the companies we work at, and the ethical questions we run into. But without further ado, I want to bring Renee Cummings to the stage, let her introduce herself, and then we'll get into some Q&A. Let's give her a round of applause. Good morning, everyone, and thank you for inviting me. Let me say it is certainly an honor and it is always a pleasure to spend time with thinking minds. So I'm Renee Cummings, and I am a data science professor at the University of Virginia. I'm an AI ethicist and a data activist. An AI ethicist is someone who looks at ways in which we can maximize the benefits of artificial intelligence and reduce or minimize the unethical approaches to AI.
A data activist and a data ethicist is someone who is really interested in bringing diverse, inclusive, equitable, and justice-oriented approaches to the ways in which we think about data. I will get into that just a little bit, but first I'm gonna give you some of my background. Many, many years ago, I started my career as a journalist, then I worked as a broadcaster for several years, and after that I became a sportscaster. After I did journalism, I decided the flip side of journalism was public relations and government relations, so I decided to try my talents in public relations. While I was doing that, I spent many years working in music. I worked in the music industry in New York for quite some time, and then I worked in fashion. And while I was working in fashion, I decided at that point I wanted to become a criminologist. I was already a substance abuse therapist, working in rehabilitation at night and working at the Fashion Institute of Technology in New York during the day. So I decided to become a criminologist, and a criminologist is anyone who deals with lawmaking, lawbreaking, and law enforcing. And while I was studying criminology, I decided I wanted to put a little bit of forensic psychology in there, and I became a criminal psychologist. A criminal psychologist, in its most casual description, would be like a profiler. So I spent a lot of time in the minds of perpetrators, thinking about the ways in which individuals commit crime, looking at the foreground and the background factors of crime and criminality. And then I decided to merge all of that, and I wanted to find out a little more about the prison system. So I decided to specialize in therapeutic jurisprudence, and that simply is believing that the law must always be used in a therapeutic way when it comes to dealing with individuals who are incarcerated.
So it's really about ensuring that we do rehabilitation and restorative justice, and that individuals who are returning from the criminal justice system have an opportunity to succeed once they're out. By now you would realize that I have a very non-traditional path to technology. I have a very unconventional way of thinking. So my contribution to technology is what I call the interdisciplinary imagination, and that is where diversity comes in. One of the things we're realizing more and more in technology is that we cannot do data science by itself. Data science and artificial intelligence and machine learning and deep learning, all these extraordinary things that we are doing to make the world more interconnected and to do big things with technology, have got to come from an interdisciplinary place. So when people ask me, I would say that I really specialize in justice, and one of the things I have tried to do since I entered the world of AI and data science and big data and AI ethics is to ensure that we bring a justice-oriented and a trauma-informed approach to understanding the ways in which we use data. So I'm gonna tell you a quick story about data. Data at this moment in time is probably one of the most, if not the most, powerful commodities that companies are now trading in. And each of us, because we're using technology (if you're on Instagram, if you're on Facebook, if you're on Twitter, if you're on any platform, if you're doing internet shopping, if you're doing research for school projects, or if you're just Googling the things that are interesting to you) have, over the last few years, created a stockpile of data, and there are companies who have access to that data.
Most of them are using that data to profit, because what that data does is give companies an extraordinary insight into the things that you are thinking about, the things that you like, the things that you dislike, and the things you are really hoping to do. And that data is not only being monetized, it could also be weaponized against us. So much of my work looks at questions such as data privacy, safety, and security; the ways in which we are using data ethically; and how, if we are building technology, we are building technology that is equitable, inclusive, and diverse. I spend a lot of time with data scientists because I teach big data ethics at the School of Data Science at the University of Virginia, and what I try to do is really stretch the imagination of data scientists and technologists and scientists in general, so we can have a more intimate, real-time understanding of the benefits and, of course, the perils of data. One of the things that I hope to get across to you, because I know Noelle and I are going to have a very intense conversation, is the need for you (and it's something that I say all the time) to invite yourself to the party, because there's an extraordinary party happening at this moment and it is called artificial intelligence. What it needs is a plethora of individuals coming with different backgrounds. So if you are thinking of a career in math or sciences or sociology or forensics or economics or psychology, whatever it is, you are going to have to study data science as well. It really comes back to what I mean when I talk about this interdisciplinary imagination. A lot of my work also looks at the ways in which criminology and criminal justice intersect with technology and surveillance technology.
Once we step outside our homes (and actually, you don't even have to step outside your home: once you have a phone inside your home, wherever you take that phone you are being surveilled), someone is collecting that data. Someone knows where you're moving inside your home. When you get out of that house, there are certain points across every city space where we are seeing surveillance being deployed. In some places it's called public safety; some may say it's about tracking, tracing, and terrorizing individuals in underserved communities. So we're also going to speak a little bit about criminal justice and AI and predictive policing and the things that you want to talk about. And one of the things I'm going to share with you as well is a project I have been working on at the University of Virginia, called the Digital Force Index. What I have done with some very brilliant data scientists is create a digital force score that will soon be public: you would be able to put in your zip code and access all of the surveillance technologies that are being deployed against you, so you can be better able to understand the power of technology and how that power could impact your life. So really, I just want you to engage with us in the conversation as much as you want, and also consider this: at this moment in time you are so well positioned, because you are going to be entering the workforce at a time when technology is blooming in ways that you are probably not yet imagining. And you have an extraordinary responsibility: a responsibility to decide how technology is made, to decide that all of us need to be included, to decide that this technology is going to be inclusive and equitable. So this is your moment in history to ensure that your name is written into the design, development, and deployment of technology.
So if there's anything that I could encourage you to do as we get ready to have this conversation, it would be to engage us, to ask us all the things you want to ask. As I say to you, as Noelle comes on the stage: you are really well positioned, and I'm hoping that from my own experience you would realize that you can enter technology, artificial intelligence, and data science from just about every discipline, because all those disciplines are required to be part of the development, deployment, and adoption of this extraordinary technology, artificial intelligence. Thank you. That was very good, well done. All right, let's go, let's go build something. Okay, you all are doing well with the questions. I am scrolling, and wow, awesome. It's not too late, though, if you want to add your questions. Also, if there's anyone who has a question in the audience, our live studio audience, you can raise your hand and we have a mic runner. The context really is anything. If you have a question for either one of us, we'll both be answering all the questions, and it's also really a chance to, you know, find out a little bit about what it's like to be, I mean, we're both women of color in an industry without many women of color. We represent divergent thinking and have experienced a lot of crazy stuff. So if there's anything you want to know about what that journey looks like, how we got into it, just don't hesitate to ask. But I do know how hard it can be to ask in a room, so I'll let you warm up. Anyone? Anyone want to take a chance? All right, yes, got one. Phew, cool, thank you, yes, right over there. Oh, okay, you're next: behind, in the hoodie, he's gonna go next. Okay, great. And just let us know who you are before you start. All right, I'm Nova, and I have a question for both of you. How would you say the thought process of appealing to as many communities as possible works?
Like, I guess to put it in similar terms, how would you describe your steps in thinking about certain questions to appeal to a general audience? Also, how does that leave certain communities unnoticed? I didn't get the last part. How does it leave certain communities unnoticed? Unnoticed, okay, got you. So I would say all of my work focuses on justice, and justice is one thing that, no matter where you are, which community you are from, or how you think, is important to everyone. And I come from that space. Many of you may have read or heard that quote from Dr. King, right? Injustice anywhere is a threat to justice everywhere. That is the premise for me. That has always been my platform. So what I do in particular when it comes to AI, as an AI ethicist and a data ethicist and a data activist, is to ensure that, using my own voice, I always bring voice and visibility to all groups. For me, it is about representation. We cannot be building new technology that includes old biases, old stereotypes, and systemic challenges that are baked into our data sets: historical data sets that are being used in the criminal justice system to create risk assessment tools that are producing these zombie predictions, or to develop technology that is just revictimizing and further criminalizing young Black and brown individuals in particular communities. So it's always about justice. The other thing that I do is bring a trauma-informed perspective to data science. Data has a memory much bigger than the data set. And if we continue to use historical data plagued with traumatic experiences for certain groups and communities, then we are using new technology to replicate old, bigoted, stereotypic thinking, and that is just not what we want to do with new technology. So when it comes to leaving communities out, I think I give voice to all communities.
When I think about diversity, it's not only age, gender, sex, sexuality; it's about ability, it's about disability, it's about generational wealth. I'm also committed to legacies, and much of my work looks at how we use technology to bring equity to the legacies that each and every one of us is able to build, because so many of these algorithms are deployed without the life experience, without the emotional intelligence, without that justice-informed, trauma-informed approach. And what that is doing is continuously denying certain groups access and opportunities. In my work, I continue to bring voice and visibility and ensure representation of everyone, because the future is not only for some; the future is for all of us. Yes, that's a great answer. I will invite you all to think about something that I do. I use an interesting little tool called Google Alerts. I don't know if you know it, but you can go in and type in a term; mine is AI gone wrong. So to answer your question, often what I try to do is learn from those who are going before me and try to see where people are missing the boat. I thought it was really interesting over the last four years that many of the major technology organizations have decided to opt out of using facial recognition in production because of the flaws that Renee was just talking about: the facial recognition systems are not being trained on a diverse enough group of people. And it's interesting that those companies chose to quit as opposed to keep going, right? AI is not a finite solution. You don't build it, deploy it, and forget it. You actually continuously train it. Like Marshawn Lynch, you continuously get data. So to your question on how we accommodate as many communities as possible: we keep learning. We don't give up when it's not working. We realize it's always a data problem, and often a bias problem. How do we mitigate that bias?
As an Amazonian: at Amazon they have a set of leadership principles, and one of them is learn and be curious. And so I always say, remember the last slide, I think it was learn by doing: always be learning. You are never done. When you deploy a model, in order to make sure it's serving people, you have to get it in front of as many people as you can and get their feedback. Some of you will be drawn to user experience as a profession, and it is desperately needed. How do people use software? How do people engage with AI? We don't have enough of those people in our fields. As a matter of fact, when I talk to data scientists, they rarely even recognize that UX is a thing, and often those same groups are the ones that end up running into trouble when they go to production. So I think: always be learning, always be building, always be adding. And there's just one thing I would add. Each of us represents many communities, and that is one of the reasons I try to encourage so many young people to invite yourself to the party. AI and data science and technology need more diversity, and if you represent several communities, as we all do, it means that no community is going to remain unrepresented if we get that equitable, inclusive, diverse approach to technology. All right, we'll take the next question, the one from Slido that's getting all the upvotes right now. Hello, oh, hello. My name is Stuart. As someone who's growing up in an era where every other day we hear and see headlines about big tech companies and new technologies being used maliciously, you know, for privacy violations or censorship or whatever, and as people who have insider knowledge into the future of technology: how do you realistically see the future of technology, and how does that excite you and concern you? Certainly, thank you, Stuart, for that question. So I'm super excited about technology. That's why I got involved.
Technology is happening, it continues to happen, and it is going to happen with or without us, because that is the future. When we look at the history of technology, we look at the advent of what was called the spinning jenny, which was able to turn cotton into thread to create fabric, and how that changed the textile industry and the workings of capitalism; we think about the printing press, and how we were able, for the first time, to move from speaking to reading books. I always say, I mean, I wasn't there for all of those things centuries ago, but I'm here for this technology and I'm going to participate in it. We can do extraordinary things with data science, and we are seeing it in every discipline. Data science is changing the methodology, the modus operandi, the approach. Think of what we can do just in communication, what we can do in healthcare, in climate resilience, in food justice, if we bring a data-driven approach to the ways in which we make decisions. Data science is about decision-making. It's about decision-making accuracy. That's why companies want data scientists. It's about business intelligence. It's about the bottom line. They want to make the best decisions. Now, the things that we are seeing with AI are extraordinary, but the things that we are seeing also have the potential to harm if we don't bring an ethical approach to the ways in which we are doing it. And an ethical approach means looking at issues such as accountability, transparency, explainability, and of course always diversity, equity, and inclusion. A lot of people call me a Jedi in AI because I speak about justice, equity, diversity, and inclusion: the first letters of each word form the JEDI acronym. I like that. That's nice. So it's happening, and it's happening all around us, and it's happening with us, because it is our data that is being used to make these decisions.
It's our data that's being used by Big Tech to do all the things that they're doing. And if you think about it, how did Facebook create billionaires? What did they sell? What's the product? They sold us. They sold us. When big companies are designing surveillance technologies or scraping the internet with all our family photos and all our great photos and all the things we like and don't like and all those emojis: do you think those emojis really care what we think? No, they don't. Those emojis are ways in which you generate data, and this is what data science is about. So although it's doing these extraordinary things, because it is so powerful and so pervasive, if it goes unchecked without an ethical approach, it could create a lot of trouble. And the groups who are impacted the most continue to be the most under- and unrepresented groups. This is why, if we're deploying something so powerful, it is so critical for you to get involved. Because what is happening is that an algorithm, a computational formula that gives a computer decision-making ability (that makes it think like a human, if we are talking about AI), is making decisions about you. And what an algorithm does, oftentimes, is make a decision about your future, your ability to earn wealth, before you even know. Sometimes you may apply for a credit card, or a loan, or your mother or caregiver may apply for a mortgage, or you may apply for a scholarship; there's something that you're applying for and you are denied, and you are not even cognizant of the fact that data was used to create that decision, not even aware that data was used against you. And when we think about things like disinformation and deepfakes, and the ways in which bad data, or data used without an ethical approach, is undermining things like democracy, you have got to ask yourself where you stand in this conversation.
So I am committed to ensuring we do brilliant, transformative, and extraordinary things with technology, but I'm also committed to us doing that in a way that is always justice-informed and always trauma-informed, understanding that we have got to ensure that all communities are being treated ethically in that algorithmic space. Awesome, that was amazing. Oh gosh, I'll tell you this funny story. I was in DC walking the monuments, right? I went to the Lincoln Memorial, and I was wearing a shirt that said I heart AI, and someone, an actual human, yelled out at me: all AI should be destroyed. Yelled it out in public, to a stranger, a woman walking down the street: all AI should be destroyed. And I immediately thought to myself (which kind of is a segue to one of our questions): they watched too much TV. If they knew what it did for my dad or my son, or for how many people. Did anyone see Top Gun, the new one, Maverick? It's a good movie, you should go see it. I went to Embry-Riddle Aeronautical University, I'm an aviator, so I've been watching Top Gun for a long time. But in this movie, magic happened. Val Kilmer, the actor, lost his voice, and he was one of the first celebrities to use voice prosthesis. So in that movie, though you see his lips move, he actually cannot speak; his vocal cords no longer work. But because he had created so much audio over his history, they were able to recreate his voice and let him live another day. Everyone know the voice of Mufasa, or maybe Darth Vader if you're gonna go dark? What's his name, Mr. Jones? James Earl Jones. He's in his 90s, he doesn't wanna do that anymore, so he recently signed the license over to an AI company to recreate his voice so Darth Vader can live on forever. Oh wait, Mufasa can live on forever, right? Darth Vader can anyway. I'm equal opportunity on AI voice prosthesis work. But that is the future.
Can it be used for harm? Absolutely. Today we are seeing hackers go into organizations and call in replicating the CEO's voice, because that information, that investor call he did, is public information. They take that call data, recreate a synthetic voice (also known as a deepfake) that sounds just like the CEO, and ask very reputable senior leadership to do something with money that they shouldn't be doing, and they don't question it. It's scary times, but it's no scarier than it was 10 years ago, or yesterday: there are always bad actors. So to come to the moral of the story: it's never been easier to be an evangelist, to do what we do, to take a stand, have an opinion, and talk about it. And that's probably the biggest thing you can do to get underrepresented voices heard: represent them, be willing to have an opinion and state it out loud for the world to hear. There's a really good question on here we have to honor, because, wow, it's up to 33 upvotes. That's awesome. There's a second one right after it we'll ask as well. And this one, I guess, is a little bit for me, but I actually think you have a very interesting perspective on it too. Does Alexa actually listen in on your conversations? I'll ask you all: how many people think Alexa actually listens in on your conversations? All right, and you would be right. Yay, okay, next question. No, I'm just kidding. So the only way for these devices to work is to listen to you. It's the only way. It's called, basically, automatic speech recognition. It's automatic; how's it gonna be automatic unless it's listening? So it's listening all the time. But think about sound, right? Sound makes a wave. My voice right now is making a very specific sound wave. And that's like a fingerprint. And I, on the device, as a developer, can say: okay, I wanna listen for this fingerprint.
That fingerprint happens to be Alexa, or Alexis, as the case may be, right? But it happens to be Alexa. And that fingerprint, I'm waiting to hear it. So I'm listening to a bunch of stuff, but it's kinda like: waw, waw, waw, waw, waw, Alexa. Oh, I know that wave. I know what that means. It doesn't understand everything. It is not a human that you're talking to. It's a piece of software, and that piece of software only understands so much. Now, if you do say Alexa, though, that puppy is streaming everything to the internet. Right up into the Alexa cloud, and that data is now preserved forever unless you go and turn it off. The challenge, though, is how many people know how to delete all their utterances from Alexa, right? Oh yes, good, there's a few of you. Go learn how, because it's your data and Amazon is doing well with it. Here's the kicker: they asked you. When you installed the app, they said, do you mind if we get all your data for free to make this experience better? And if you're using Alexa, you said yes. How many times do we do that on our phones, right? We install an app and we're like, scroll, scroll, scroll, enter. Click, click. Whatever it said. So just to complement that, I would love, Renee, your perspective on this concept of privacy, because that's ultimately what the question is about, right? What's up with privacy? How does it relate to Alexa always listening? Well, I would say there is no privacy. Privacy is dead. In the world of technology, forget it. There's no privacy. What we are trying to negotiate now is privacy, because technology has taken that away from us. And one of the things you would realize is that technology always moves years ahead of the law. So the challenge right now is: how do we regulate AI? How do we regulate data? How do we regulate technology? And that is the biggest challenge, the inability to regulate in real time.
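[Editor's note: the "listening for a fingerprint" idea Noelle describes can be sketched, very loosely, as template matching over a buffered audio stream. This toy illustration is an assumption about the general technique, not Amazon's actual wake-word pipeline; the function names, threshold, and raw-sample matching are all hypothetical simplifications (real systems match learned acoustic features, not raw samples).]

```python
def similarity(window, template):
    """Cosine similarity between an audio window and the stored 'fingerprint'.

    Hypothetical helper: real wake-word engines compare learned acoustic
    features, but the shape of the idea is the same.
    """
    dot = sum(a * b for a, b in zip(window, template))
    norm_w = sum(a * a for a in window) ** 0.5
    norm_t = sum(b * b for b in template) ** 0.5
    if norm_w == 0 or norm_t == 0:
        return 0.0
    return dot / (norm_w * norm_t)


def detect_wake_word(stream, template, threshold=0.9):
    """Slide the template across the buffered stream; return the first
    index where the fingerprint matches, else None.

    Until this fires, nothing leaves the device; only after a match
    would audio start streaming to the cloud.
    """
    n = len(template)
    for i in range(len(stream) - n + 1):
        if similarity(stream[i:i + n], template) >= threshold:
            return i
    return None
```

The key point the sketch captures is that the device is always listening but only ever comparing against one pattern; everything that doesn't match the fingerprint is discarded on the spot.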
On Alexa, one of the things I would like you to Google is how many times Alexa's name has been called in the courts, particularly in cases of murder and domestic violence. Alexa is now actually giving evidence in many court cases, in particular homicides. Just think about that. Interesting, all right. Okay, so next question. That was awesome, thank you for that question. The next question, which is now, oh, okay, they automatically get removed. Okay, great. I like this question: what is the most accurate depiction of AI you've ever seen in movies, TV, or books? I would say there isn't an accurate depiction, because the biggest challenge at the moment in AI, scientifically, is coming up with an accurate definition; AI itself is still very much undefined. One of the things that we do in media is use these very imposing images of robots, and robotic fingers touching human fingers, and robots just stepping out of the stratosphere. And what that does is create a lot of fear, oftentimes, or it makes you believe that AI is still coming, or that AI is not totally here. But it is here, and we've been using it, and we've been using those algorithms that continue to misbehave. And one of the things I try to do in my work is reduce fear with education, which means demystifying, democratizing, and decolonizing our data and our algorithms, and ensuring that you understand the power that you have, because it is your data that is being used to build a future that could very soon exclude you. And that's the place you don't want to be. Yes, I was raised on the age of science fiction, like 1945 through 1960. My dad had me reading Asimov (I, Robot, if you've ever heard of that movie, it comes from a book), Bradbury, Arthur C. Clarke. And so it's really interesting to me, because I actually see all those books and I'm like, yes.
But one thing that's different about science fiction, especially older science fiction, is that it's very metaphysical; it is always trying to teach you a human lesson through the lens of robotics or through the lens of artificial intelligence. And so there is one I actually do like. I don't know if I should even say what I like, but there's a movie, an aspect of a movie, called Her. Anyone seen it, heard of it? Don't go watch it if you haven't seen it. But if you have seen it, this movie is about contextual AI: you would put an earbud in your ear and you'd have an assistant that did everything. It actually connects with the next question, which is on artificial general intelligence. In other words, one bot to rule them all. Will we ever have a robot that has sentience, or is conscious, or can make its own decisions? And I will tell you, there's both the academic version of this and the consumer version of this. From a consumer's perspective, we will achieve this general intelligence, but it won't be one thing; it'll be thousands of bots. We are building these bots today, where I could go to a device like an Alexa device and have it pay for my Starbucks, order from Uber, book me a flight, get my hotel, give me an itinerary, be my travel agent, be my home-school care provider, manage homework schedules for my kids. It seems like a robot that knows a bunch of stuff, but actually it's one interface with a bunch of little bots running around and doing tasks. Those little bots are known as narrow AI, and we as a society are very good at that. Actually, as humans, we're very good at that. I don't know many humans that are lawyers and doctors and physical therapists and massage therapists and yoga instructors and marathon runners. Many of us pick a thing, or a few things, but we don't do all the things. And AI is very similar to that. I think its capability is similar to that.
So I'd like to ask Renee to weigh in on that question of artificial general intelligence: do you think AI will become conscious? Well, I would say definitely not. I don't think we will ever get that human when it comes to technology. I do agree with you. What we are going to see is more of that concierge approach, personalization in AI. So AI doing more and more of the tasks that we don't want to do. Whether we're going to have technology that is indistinguishable from a human, I don't think that's ever going to happen, because it's about collective intelligence. Humans are building the technology. The technology is not yet building itself, although the algorithms are learning from themselves. And there are some things we could never replicate: emotional intelligence, life experience, a human smile or a human hug. Those are things that technology will never be able to do. Something I always say, one of the reasons it's so important to have an ethical approach to AI and data science and technology, is that we do not ever want to get to the point where we're building technology and algorithms to teach us what it means to be human again. There's something you feel as a human being when you see another human being that you could never build into technology. That consciousness, that spirit, things like faith. You're just not going to see that. But what we are going to see is technology being more and more incredible and more and more transformative in the kinds of tasks it takes off our hands, and of course bringing a more personalized approach to the ways in which that can be done. More of a niche approach as well, where you have a piece of technology that understands the things that you need and want and can deliver in real time. Awesome, thank you. We have a question here in the back. And then we'll come to the front.
Okay, oh my gosh, yeah, you guys are awesome. I know, all right, we're gonna start with you. And then we'll have to hit this side, because you guys got no love over there. Maybe we'll go to you. Okay, go ahead. Yes, thank you. How you doing? My name is Antonio Barnes. I teach at Southeast Raleigh Magnet High School. I'm also an Embry-Riddle graduate, an alumnus. One of the things that I always like to show my students is a video called A Tale of Two Cities: How AI and Robotics Will Change America. But one thing that I do want to ask you about is how AI leaves some cities behind, because everyone thought years ago that after the '08 crash all of our jobs went to China. That was not true. Our jobs were taken away by automation. I live in Greensboro, North Carolina. Greensboro, the Triad area itself, at one point, in that video, was fourth in the United States in automation, which is great on one end for some, if you're into technology, and bad on the other for those who are not. How would you address the need to bring others up to speed, since it's showing that some small cities and towns are becoming rust belts because of AI and automation? Thank you so much for the question. It really brings us to a very interesting area of data science and AI and algorithms, which is the smart city. What we're seeing internationally is many cities investing an extraordinary amount of money in building algorithms that are making cities more effective, more efficient, and deploying a certain amount of service delivery excellence. The challenge that we have, particularly in our American cities, is the fact that many of our cities are also where we have our underserved, high-needs communities.
So what we have been seeing happening, and this is again why data and AI ethics are so critical to the ways in which we are designing, developing and deploying technology, is an extraordinary amount of investment in areas of cities, or in communities within cities, where you have people making more income, people who are more educated, people who are looking for a different kind of experience. Now, with all technologies, some jobs are going to be phased out and new jobs are going to be created. The most critical thing that you can do at this moment, and I continue to say you are well poised, well positioned, because in this room we're looking at the future. This is not the future here; this is the future there. This is why it is so important for you to upskill in real time when it comes to your understanding of the long-term impacts of technology on society, the extraordinary things that can be done with data science and AI, understanding algorithms, and understanding the decisions that you make about your life every time you share your data. So yes, some cities are going to blossom and some cities are going to decay, but one of the things you want to ensure is that whatever city you are in, you are making a collective decision to ensure that that city prospers, and that you are a part of the development, the progress and the prosperity of any space you are in, because you have upskilled yourself in real time with an understanding of how to use data in your everyday life, in the ways in which you are doing business, in the ways in which you are thinking, and in the kinds of approaches and disciplines of study you are going to embrace as you leave high school and move into university. Awesome, yes, and I would say that education is changing. So I think the other thing is exposing more people to the opportunities.
Before, it used to be that you had to become a coder; that was your option. You had to become a very technical person to be successful. Now with AI, there's a broad range of talents that are needed in this space, but most people don't realize it. So I spend a lot of my time in high schools, middle schools, junior and community colleges educating people on what opportunity is there. But yeah, it is a shift. We're gonna have to move to a new skill set, and I've found that this is a struggle, but I'm hoping that we can create more diversity in this new world. We have a new workforce we're building, so it can have all of us in it. It doesn't have to repeat the same patterns we've seen, but sadly right now it is, unless we change something. I wanted to take one thing there. I think one of the things that AI and data science and technology require is imagination. Because imagination is what is being used to build the things that are going to benefit us and some of the things that may harm us. So imagination, and I think each and every one of you in this room has the power of imagination, and that is what should move you in the direction of technology. Awesome, thank you. Yeah. Hi, thank you so much for this. This has been so interesting. I have two questions. First, I'm really interested in how companies collect data, and I'm wondering how governments access that data, and under what rules and parameters. And also, maybe a little easier: what are three books that you would recommend? You could start and I could finish it up. Okay, great idea. Let's see, government. So here's the good and bad news about data. There's lots of it, everyone has their own, and no one's sharing. It would be interesting and nice if the world gave us a vehicle for contributing to data sets and getting paid as individuals for participating in those data sets, since they're making billions and billions of dollars on us.
So that'd be awesome if we could get credit for being in these data sets. But that's not, of course, how it works. Instead, the government releases hundreds of thousands of requests for proposals to build unique data sets for their use. And they all look very similar to the EULA we all sign, with no conscious thought, to get an app on our phone. Many times, applications that you're using for an entertainment purpose are actually funded by a government grant to collect that data. And the information about that is in the EULA you casually scroll through and accept. It's not hidden from you. It's just that we as a society are impatient to install an app, right? We're like, oh, I want this thing. We don't read it. My dad, while he was recovering from his cognitive issues, ended up reading 75 app EULAs. And several of them said that if we had to return something, we'd have to go out of the country to return it, and it had to be returned in person. And I was like, well, that's odd. I probably should know that if I'm ever gonna buy anything. But this is a huge problem: they put the burden on the user to know better, but we're not educated enough to know where to look for that information or how to read it. Much of it is legalistic and uninterpretable. That's why what you mentioned about the principles of ethical AI falls under transparency and explainability: can a normal person understand what we're doing? Because that's part of our responsibility. So I think that's part of it. And the three books, oh goodness. I'm just gonna say one, I can't think right now. No, I can't, but I'm gonna wait and see what you say. Okay, so the books are a lot, because I spend most of my time in books. I will give you three. Weapons of Math Destruction by Cathy O'Neil. Cathy is also a visiting scholar at the School of Data Science at the University of Virginia.
It's also considered one of the foundational texts for really understanding the implications of technology on society. Dr. Safiya Noble's Algorithms of Oppression. Definitely, you would want to read that to see how algorithms have oppressed us and continue to oppress us. And my third would be Race After Technology by Dr. Ruha Benjamin, which really gives you an understanding of why we need to bring a justice- and trauma-informed approach to the ways in which we're thinking about technology. So this is the challenge. When it comes to data, there are five companies who own most of the data in the world, and those are Google, Amazon, Facebook, Apple, and Microsoft. Government collects data through the census, and that is the primary way in which governments collect data. The challenge is this. When it comes to regulation, the most stringent regulation is really imported into the US. It is the GDPR, the General Data Protection Regulation, coming out of Europe, as well as now the EU AI Act, which at this moment is mostly still being discussed before it turns into legislation. Those two European approaches to data and AI are very strong legislation with teeth, really looking to protect people against negative impacts. In the US, we do not have that kind of legislation just yet, because so much of the compliance, the auditing and the legislative approach is still driven by the technology companies. And there's a major lag between innovation and the law that regulates innovation. It's in that space in between that those big tech companies do a lot of loophole jumping and are able to do the things that we don't know they are doing.
Now, when it comes to government, I will say I've had the opportunity within recent weeks to speak in front of several government committees and hearings, particularly out of the White House. The Biden administration has a National AI Advisory Committee, and I spoke to them about ways in which we need to do data science and AI ethically. I have another sitting coming up, a hearing with a White House body that looks at facial recognition as well as surveillance technologies. The challenge is this: your data is being collected. The thing about data is that we can't see it. You can't see those data points, but everything we do, from smartphones to smart lights to your GPS to cars that are connected to everything else in our lives, the internet of things, everything is so interconnected that those data points are being created. Now, one of the most interesting things coming out of the White House is the Blueprint for an AI Bill of Rights. You can Google that. What it has established is that we need to bring an ethical approach to the ways in which we are doing AI. And if you know about the original Bill of Rights, I don't think any of us were around when that was written, right? But we are around while the AI Bill of Rights is being written, because what we are seeing is that a simple thing like an algorithm has the ability to undermine your constitutional rights, your civil rights, your civil liberties, your human rights. What we are realizing is that it is taking away agency and autonomy and independent thought from us. And what we've got to do is rewrite that bill of rights to include the impact of AI, which is creating extraordinary havoc as well as extraordinary progress. Oh, let's do one. Yes, then all the hands come up, let's go. Okay, maybe we'll take one from the audience. Look how many hands you have. I know. What am I supposed to, wait, wait, wait, wait.
What we can do is let them all ask and then address it at the end. Oh yeah, that's a good idea. We can let them all just ask questions and then we'll sum up. Okay, so go ahead and just pass it down. Just ask the questions. I recently read a book called Feed by M.T. Anderson, which is a satirical commentary on corporate America published back in 2002. Some characters in the book are connected to this thing called a feed, which generates ideas and products based on their own thoughts. Do you believe that AI and technology in general promote, or will promote, consumerism in the future? The reason we all have an iPhone is because it already has promoted that. The reason we are shopping like crazy on Amazon, and all those packages are coming to your door from this week into Christmas, is because it's about capitalism. That's what AI is about: bringing that technology to the marketplace for us to become the marketplace. Even surveillance technology does not really reduce crime, but governments and cities are spending an extraordinary amount of money on it. We're already in the AI capitalism space. What's going to happen is more and more of it, and what you're going to see is a wider gap between the haves and the have-nots. So it's really going to create an extraordinary underclass if I don't get more of you to enter the world of AI ethics and data ethics and data activism. And really quickly, one of the things you'll start to see: I'm working very closely with a company called PayTalk. What they allow you to do is use your voice to pay for everything, and it works with Fire TV. So if you're watching Fire TV and something comes up and you're like, oh, I want a Starbucks, you can just talk to your Fire TV remote and it orders it for you. This is the future.
But again, how many people are going to have a Fire TV with a remote they can talk to, with an internet connection stable enough to facilitate that, with a Starbucks or Uber account already set up? It starts to tear apart the haves and have-nots; it starts to really segregate who's going to be using this technology and who isn't. Thank you for the question. What do you think an individual person can do to best protect themselves from profiling and just data collection? To protect themselves from profiling and data collection? Data collection. To protect yourself from profiling, that's one of the greatest challenges ever. But what you need to do is educate yourself, understand what your rights are, understand what your responsibilities are, and really participate in the kinds of decisions being made at the government level, the local level, the city, wherever you are, even at the student level, about the things that we're going to accept when it comes to data. Look at questions such as privacy and, of course, the protection of your privacy, and the fact that not everybody wants to be involved in technology. So there is still that right to opt out and to be forgotten, which is one of the things we are unable to do at the moment because of data. Great, last question. This young man in the orange, and that'll have to be our last one. Yes, okay. So my question is aimed at Ms. Cummings. You said you worked in criminology, and my question for you is, how are tech and AI used in that field of work? Criminology, yes. How does tech, so what we're seeing is that criminology is a great space for technology. Everyone wants to design something to reduce crime, something to catch the next criminal, but it's also the place where we're seeing the most bias, the most discrimination and the most stereotyping, because what everyone is designing is technology to catch black and brown men.
And what we're also seeing is an extraordinary amount of pseudoscience, such as using things like DNA to come up with an image. And that image is always a black or brown man, or a black or brown person. So a lot of pseudoscience, which is poor science, and what we're seeing is the replication of the same old stereotypes in things like predictive policing and surveillance tech that are really, again, tracking, tracing and terrorizing. And can I just ask you for one favor? Just, what's your question? He's been so attentive, please, let him go. Yes. And it's definitely not ethical. It's never going to happen, because there are some careers that will always need humans. AI and technology need humans in the loop, and the EU AI Act is also looking at the ways in which we are using algorithms to interview individuals. Many of those approaches, algorithmic interviews, HR deployment of algorithms, video interviews, just using biometrics in very strange ways. The EU is actually looking to outlaw that, because it finds these to be high-risk approaches and we shouldn't be using them. So don't be fearful, educate yourself. And as I always say, invite yourself to the party. Thank you so much, Renee. Thanks for this time. Thank you.