All right, thank you. All right, my name is Joe Mocznik, and I would like to warmly welcome you to Hale Library today. I'm so excited. Remember when was the last time we had an event that got totally, let's call it, sold out? So we booked this place and we have two overflow rooms. So hello if you're in one of the overflow rooms; I would like to welcome you to the streaming of the live event that is going to go on. I'm going to briefly introduce our speaker, Mutale Nkonde. How does that sound? Good, thank you. And then I'm going to move myself out of it and Mutale is going to take over.

So let me first start with thanking three individuals who really were the brainpower behind this idea to have our third library lecture series focused on artificial intelligence. Jason, would you mind standing up, please? Jason Coleman, Alice Anderson, and we have Carol Sevin, who is in Hawaii; good for her. So we have a group of individuals in the library who focus heavily on AI, artificial intelligence, and for at least a couple of years they have put their foot forward and said: well, what is this AI? What should we do with it? How could we apply it? And I think, when you see major obstacles, you have a temptation to run away from them; we have a group of people in the library who run towards them, and together with a much larger group of individuals we now have a really solid cohort who is actively pursuing artificial intelligence, especially artificial intelligence literacy, and some other aspects, to figure out how libraries can help.
How can we here at K-State help our students, faculty, researchers, and perhaps the community, because we are a land-grant institution, to really achieve some of these wonderful accomplishments as we utilize this pretty sophisticated, fairly disruptive tool, and find a way to put it to use? I would like to thank the entire library staff; special thanks go to our marketing department and the many other folks who have helped out with this. Let me acknowledge the significant dignitaries here: we have the vice president for diversity, equity, inclusion and belonging here, thank you so much for joining us. If there are any other deans, vice presidents, and such, I'm very appreciative that you are here or online, and thank you to the rest of you who are joining us.

So today we welcome Mutale Nkonde, a visiting policy fellow at the Oxford Internet Institute and CEO of AI for the People, a non-profit focused on using popular culture to promote policies and understanding around algorithmic bias. I tried this three times; it's not easy. Not easy to pronounce; just imagine how hard it is to do it.
Okay. She has been a fellow at Stanford University's Digital Civil Society Lab, Harvard University's Berkman Klein Center for Internet and Society, and Notre Dame's Institute for Advanced Study. In her capacity as an AI policy advisor to Congresswoman Yvette Clarke, Nkonde contributed to the drafting of the Algorithmic Accountability Act, the DEEPFAKES Accountability Act, and the No Biometric Barriers to Housing Act. Nkonde's writings have been published in the Harvard Business Review, the Harvard Journal of African American Policy, Ms. magazine, the New York Times, and the Washington Post. She earned a bachelor's degree in sociology from Leeds Metropolitan University and a master's degree in American Studies from Columbia University. After the lecture, we'll have time for a brief question and answer period before we adjourn to the fifth floor gallery for wine and hors d'oeuvres.

Before I welcome Mutale to speak: we were just exchanging notes, and we both lived in London at the same time in the 1990s. I was kind of in North London and she lived in North London, so we might have seen each other in a previous century, or millennium. So I would like to warmly welcome Mutale Nkonde here to K-State.
Oh my goodness. I don't know about you, but I'm one of these people who is always scared that I'm going to have a party and nobody shows up, so I am very honored and excited to be here today. I'm going to do a couple of things over the next hour: I'm going to introduce myself, some of the work that I've been doing, and some of the questions that have been raised, and then I'm hopefully going to get into conversation with you all, because that's by far the most exciting and important part of the lecture for me. So if you'll just bear with me whilst we take a journey into my brain, then we can go.

The reason I've put up this slide is that public interest technology is a field that the large foundations, specifically those coming out of New York and California, have been trying to ground over the last five or so years. The idea is this: we have a public interest media system, where the information you should expect to receive is fair and balanced and pro-social. We have a public interest law sector, where those that cannot afford representation can get it for free. So as we move forward into what we are in now, which is an information age, with AI being only one of the tools that we'll use in that information age, you're going to have people that look at technological development and governance but do it within the public interest. And the interesting thing for me is that I've worked in this field for around 15 years, before it had that name, and before AI was AI, because it's based on computing principles, and those are older. But the reason I really wanted to start here was to ground the conversation in the idea that these technologies are exciting and they're disruptive, and there is no reason that they need to be harmful as well, even though I know that that's a lot of the chat, particularly in the media.
So: an introduction; then I'm going to go through a case study of my current work; then some unifying ideas about where algorithms come in. There is going to be a little bit of engineering in this, but if you don't like math and you hate engineering, it's not going to be terrible, and if you do like math and you love engineering, it's going to be boring; but if you're somewhere in the middle, you'll stay with me. And then lastly, we'll go back to this grounding idea that technology, like any other field, can be in the public interest.

So who am I? I sprang up in what is now the field of responsible AI around 2019, which I always call the before times, because I feel like before COVID is probably like before Christ: what happened then, we can't remember, it was so long ago. I wrote this paper along with the sociologist Jessie Daniels, who is at the City University of New York, and the computer scientist Darakhshan Mir, and what we were really looking to do in this paper was to try and understand why technology was so persistently racist and what we could do to make it less so. So back in those heady days of being an idiot, we were like, why don't we just do a little report and a checklist and AI won't be racist anymore? It still is; we'll get into that later. But what this did was at least put out into the marketplace the idea that technology can and should be better if you have lots of different people thinking about how that technology is developed. And even though I don't speak about DEI exclusively or specifically in this talk, you will keep hearing me make these invitations for absolutely everybody in this room to be involved.

So: two of the foundations that read my report were the Ford Foundation and the Carthage Foundation. I had never heard of either; that's another consistent theme, I never really know what's going on, I'm always like, wait, what's that? And they basically said, would you like to start an organization?
And I said, as a joke, I would love to start an organization, because the way that these technologies are engineered is not for people. And they said, you know, well, what would you call it? And I said, "AI for the People," as a joke, and they were like, that's great. And then they gave me a check, and I was like, that is great. So I now run a non-profit.

And I was just, you know, alone, kind of screaming into the wilderness, telling people not to make technology racist, and then Nasdaq were like, we think you're doing great work that VCs, of all people, really want to integrate into their investment portfolios; will you come to Nasdaq? I live in New York City, and even though I knew what Nasdaq was, and I knew that it had a bell, and I knew that I would have a really big picture on the side of the building, I just seemed to ignore that and sent the first picture that came up, you know, when you attach a photo, which is this one. But it was just so strange to be in a situation where you're ringing a bell to tell a bunch of people to make less money so that they can make better technology. But it happened to me; it can happen for you all. That is America.

So I do advise a number of companies, TikTok being one; I can definitely speak to you about that. Poor TikTok, I hope it works out for them, but I tried. And Google is another one, which you may have heard of, and I'll speak about that later. But the interesting thing about Google was that initially, when I kind of came into the Google world, they were really mad that I was saying racism is in technology; now they're not, so they're back.
So that's good. And one of the things that is really important for the students in this room, and the non-students, is that we are in a knowledge economy, so we have to keep thinking. I recently found out that I will be starting a PhD at Cambridge in the fall, which is hilarious to me, because I've been to a bunch of universities, but in this particular university you join a college and the university. So I nearly didn't get in, because they offered me the place and then I ignored the emails, and then they were like, no, you need to come to a college, and I was like, wait, for real? Okay. So I'll be doing that, and I can certainly talk about that for other people who consider themselves to be lifelong learners, which is really the economy that we're going towards.

So what I'm going to do is take you through a case study of the type of work that I do in public interest tech. I am specifically very interested in working with social media companies and what they call their policy and trust and safety teams. Even though I meant to start this talk by thanking the library, I clearly got nervous and did not do that; but thank you to all the library staff, I really appreciate you. And what I appreciate even more than the invitation to come here are some of the questions that they need to ask about information integrity, constantly; that same work is done within social media companies. When we think about that work being done in social media companies, we're thinking about: is the information true? Is the information harmful? How can we make sure that it spreads? How can we make sure that it doesn't spread? And one of the things that I certainly did on January 6th, as an American, was, number one, I could not believe what was happening.
I really did think it was a movie for some reason. But number two, I looked at the crowd, and I knew three things were happening. I knew that there had been social media organizing, because that's one of the principal ways that we're exchanging ideas. I knew that, obviously, this was a historical moment, as you all knew. But as I looked into the crowd, I was like, oh, there are no Black people here; okay, nothing to do with me. Not really. That did not turn out to be the case, and the case study I'm going to take you through is how, algorithmically, even the things that you think you do know can be changed in those environments, through decisions that we make, or don't make, around information.

So the first person that really came to me as I was looking at January 6th, trying to think what would inspire this group of Americans to make this series of decisions, and who these Americans were and why other Americans weren't there, was Enrique Tarrio. The thing that's really interesting about Enrique Tarrio, this guy in the middle with the vest on, is that he is a Cuban man; he's Afro-Cuban. He was the chairman of the Proud Boys, and if you don't know who the Proud Boys are, I mean, it's all over the news, but the highlights: street gang. Just don't do it, kids; just don't do it, not good. But in order to be a Proud Boy, one of the things you had to do was say that white supremacy was the supreme law of the land. And I was like, well, that's great for you if you're a Proud Boy and that's what you want to do; but Enrique, really, why are you there? So what was really interesting about him is that he had, through these social and technical interactions, been able to make this leap.
He ultimately got sentenced to 22 years in federal prison for masterminding seditious conspiracy for his role that day, and even at sentencing he completely used contestable information as a defense.

The next thing that I found really interesting, from a social media perspective, was: how are we creating truth, and who were the people behind this? I came across this guy Ali Alexander, who is a fascinating creature. I shouldn't say fascinating; I'm not being rude about this man. But prior to this he did something that I recommend nobody here does: he went to social media to admit that he was committing crimes. Now, if you are criminally inclined, this is advice, just from me to you, human to human: don't go to social media and tell us, because you will be investigated. But he made a series of videos, all of which were variations of: I created the Stop the Steal hashtag. It is mine. There is going to be this great march, and if anybody tries to say that they did this, know that I did it.

The problem with that was what he was actually saying, in engineering terms, when he talked about creating a hashtag. We think of a hashtag as what we see on the outside of the website: you see the pound sign and then a phrase. Algorithmically, what happens is that that phrase is translated into a line of code, and then those particular markers are used by people, people like me (I told you, I'm getting this degree in digital humanities), to start to understand: Who is using this? Who are they following? What actions do they take? Where do they live?
What do they like? And we build statistical models of the type of people that would engage in this. Another really famous hashtag that I'm sure we all know is Black Lives Matter; it would be that same use of a hashtag that has this social meaning but also this digital footprint. He actually wasn't at the riot of January 6th, but he was asked to testify in front of the House, because he had very kindly told us that this was all him, only him: if anybody asks about me, it was me. So that was very nice.

But what was really interesting in both of these case studies is that these were leaders of a movement that was not largely Black, but they were Black. This really set my wheels spinning, because I was interested to find out what was going on from an AI perspective, what was going on algorithmically, that would drive these people to this fate. So we're going to watch a really, really small part of this video; if you're interested, I definitely recommend that you look at it, but it's the lead-up that underscores what I'm saying, because this is a huge question across the humanities, across the digital humanities, and within industry.

[Video clip plays:] "Of the nearly 400 American rioters arrested or charged after the January 6th Capitol insurrection, 93 percent are white and 86 percent are male, according to the Chicago Project on Security and Threats. Here's Michel Martin speaking to the study's principal investigator, Professor Robert Pape of the University of Chicago, with some surprising revelations about the attackers and their motives." "Professor Robert Pape, thank you so much for joining us."
"Thank you for having me." "Now, you said from the beginning it's important to understand who stormed the Capitol on January 6th, and by that you don't just mean for the purpose of bringing criminal charges. You've analyzed information about more than 400 people who've been arrested so far, and I'm being sort of vague about the number because the number changes every day. Some of the people have been in the news, like the guy with the fur hat, the guy putting his feet on the Speaker's desk. But you saw that there are patterns. The first thing, something that anybody who was actually watching the thing unfold could see: overwhelmingly white, overwhelmingly male. We could also see that they're older. What was the kind of overall profile of the person who got involved?"

[Video stops.] Okay, he does great work, but we're not going to get into it right now, because we've only got an hour. But what is really interesting about that is how we think about how those people got to January 6th. He picks out a number of ideas, which we were then able to see in the data. So the first idea that Pape picks out is that no matter who stormed the Capitol that day, in his surveying they all had a negative view of immigrants: in their view, people that immigrated to this country, whether legally or otherwise, were stealing something. There was something really negative about this, and that was one of the reasons that drove them to DC that day, because they really feared for their future. And I would encourage that we really sit in the compassion of that, because there is nothing worse than feeling scared and not feeling in control, and this was one of the ways that people were getting their agency. The second thing, and this is kind of where my interest comes in, is the places in which they were expressing this: they weren't just saying it to each other, or meeting each other in
various places; they were going online. And as we think about our new online environment, specifically after ChatGPT and its invention: what that technology does is take an idea that you've given it and build on that idea. What would it mean in 2024 for that same thing to happen in a post-ChatGPT environment? The reason that's really interesting to me is that you would effectively, potentially, have the machine finishing your thought. So if you came to it with anti-immigrant ideas, what would it say? What would be the impact of this? And what happens when this happens at scale? Because even though we're the only ones interacting with the technology, the messages that go out become globalized messages.

The larger theoretical driver is something called the great replacement theory, which is where a lot of my research actually lies. And the really fascinating thing about the great replacement theory is that when we look at the data, you can switch it out. The reason that an Enrique Tarrio can show up as a Proud Boy is that they can switch off the race part and switch on something else. The reason that Ali Alexander was so comfortable in his role with Stop the Steal is that, if you looked at his interviews, it was really about jobs. And the thing that makes it so permanent and scary, if this is the way that we're going to be using algorithmic systems like social media, is that we're in a massive moment of economic and social transformation in this country.
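To make the idea of "the machine finishing your thought" concrete, here is a minimal next-word-prediction sketch in Python. This is a toy bigram model, not any real product's code, and the three training sentences are invented placeholders; the point is only that a model trained on text skewed toward one framing will complete a prompt with that same framing.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training text. In a real large language
# model this would be billions of web documents, including whatever
# toxic framings circulate widely online.
corpus = [
    "immigrants are taking our jobs",
    "immigrants are taking our future",
    "immigrants are welcome here",
]

# Count bigrams: how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        next_word_counts[a][b] += 1

def complete(prompt, steps=2):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        followers = next_word_counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# "taking" follows "are" twice but "welcome" only once, so the model
# finishes the user's thought with the majority framing.
print(complete("immigrants are"))  # -> immigrants are taking our
```

With this toy corpus, the completion is driven purely by which continuation appears most often; swap the corpus and the completions swap with it, which is the scale problem described above.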
I was so excited to hear that you're a land-grant institution, because if we look at the land-grant program after the Civil War, it was really developed to provide the new knowledge needed as the United States was moving away from being an agrarian society and becoming an industrialized society. We're in that same type of moment now, except we're moving away from being a manufacturing and service-based economy and towards a knowledge-based economy. And so these same fears, that people are taking our jobs, that I'm not going to have a future, combined with technologies that can really reinforce negative or toxic conversations online, are really scary, at least to people like me.

So we were looking into that data, and I just could not get out of my mind that some of the leaders of this Stop the Steal, January 6th movement were Black men. I went to look at a survey by a group called the Public Religion Research Institute (I pretty much don't think I put the right acronym on there, but okay), and the reason for that is that part of my work really intersects with religious studies. They had done a survey in 2022 around the great replacement theory, where they were asking if people agreed or disagreed with this idea that immigrants were invading the country. They found that among Americans that trusted certain news sources, 80 percent of respondents felt that way; QAnon believers, 65 percent; Republicans, 60 percent; Democrats, 11 percent; independents, 26 percent. The reason that is really significant to me is that it shows you that across the political spectrum, this fear that something is being taken from us exists, and it's mainstream.

But when we think about what this belief does on the extremities: this is a picture of Dylann Roof, who in 2015 shot nine African American people while they were at Bible study. He was completely radicalized online. This is actually a picture that was taken from his Facebook page.
It wasn't removed; I can speak about the ethics of that later. Another person that really, really believed in this great replacement theory is this guy that we've already referenced here, with the horns. He considered himself the QAnon Shaman, and after storming the Capitol he actually went to pray on the dais, gave thanks, and called on Jesus's name. Again: algorithmically radicalized. And this young man (I may say his name incorrectly, so do forgive me), Payton Gendron, was the man that went to Buffalo, New York and shot 10 people while they were shopping. And again, this idea of the great replacement was there.

What is really key about all three of these is that all of this happened online. This is not people in their communities telling them this; if you looked at their FBI files, all of this stuff happened online. So in the slide before, where I was showing you 60 percent of these people, 30 percent of those people: that first slide was us. It was people like us that were surveyed. But this is what happens when toxic messaging gets into large language models and then gets to the extremes.

So why is this a responsible AI problem? Going back to this idea that technology can be done in the public interest, and going back a little bit into my career and what I've looked at: I've been really interested in thinking, from an engineering perspective and from a social science perspective, how do we maintain free speech and maintain political balance online, while at the same time protecting against these messages that we know to be toxic? On the left of the screen, if you do not know this book, you should certainly know this book. Safiya Noble is a professor at UCLA; she is the winner of a 2021 MacArthur genius award for this book, Algorithms of Oppression. And the key takeaway from this book is that when we think about online search, we are searching advertising platforms. We are not searching for knowledge.
So I'm sure my library professionals are really happy that I'm saying this, but it's true. She's also an information scientist; that's her PhD. In Algorithms of Oppression she was able to show how, through search engine optimization, particular search terms will bring up toxic results. In her particular case study she was looking at the way young women are represented online. All the news is bad, by the way; that's the top line, and I don't need to ruin it for you, but it's particularly bad if you're a Black young woman, and it's particularly bad if you're an Asian young woman. And she really goes down into the data. This next picture, which I couldn't get to be clearer, is the reason that happens.

So a lot of my work really looks at thinking: how can we use recommendation algorithms to not push us into these darker parts of the internet that make being online a dangerous place at worst, and a place that sucks at best? So, an algorithm is a software program; it is proprietary; and it gives technologies the ability to either make predictions or make decisions. And that ability for a technology to make a prediction or to make a decision is what makes it artificially intelligent. So when you think about the term AI, you should also think about the fact that you are thinking about a machine that can engage in a cognitive task.

So if people have voice assistants, they're often telling their voice assistants... I don't know why you all are doing this; I clearly am a technophobe and live behind a tree and only speak to people in person and write notes, because of my work. But many people are saying to their voice assistant: pay for this, show me this song, give me this. And the reason it's able to do that is because of this engineering process, these algorithms. The same goes for people who are opening their phones with their faces. Another crazy thing: keep your face to yourself. It's very valuable.
Don't do it. And in all these other small ways, we're using AI and training AI in our lives every day. If you're liking things on social media, that's another way that you are engaging with an AI system, and you are actually training and making that system better. So if you're the user, this person here, what you do is go onto your phone, and you're like: I like this picture; oh no, oh no; I like this picture; I like this movie; show me this. And what that does is go into this black box, which is a recommendation system, and it starts to make a model of you. It says: Mutale is the type of person that watched Love Is Blind, that really likes pizza from this particular place, and that drives a Prius. What happens is that becomes a statistical model of the type of person I am, and what is fed back to me are ads for Priuses. If you've ever done this, where you've gone online and you were like, how do they know? How did Facebook know I love hugs? You're being tracked. So it will feed back to me ads and messages aligned with my wants and needs. It could also feed back to me things like: immigrants are bad, you'd better go get them. Or, in the case of my friend Ali Alexander, it could be like: oh look, Stop the Steal.
Look at this, how convenient: hey, this is where all your friends are going to be on January 6th. And it can lead to negative behaviors. This is happening at scale, and what I mean is that everybody, all over the world, is doing this all of the time, and it's pushing us into sectors of the internet where everybody looks, feels, and says what we do. Which sounds utopian, but in actual fact, the way that artificial intelligence is different from human intelligence is that we have values. We can apply compassion; we can apply notions of justice; we can also say no. Algorithmic recommendation systems are automatic; decisions are made in a flash. One of the key important factors of decision-making is time, and the reason that artificial intelligence systems will never supersede human intelligence, and at best, in my view, should be tools (it should be humanity working with machines, not machines working on behalf of humanity), is that if you are making decisions devoid of what it means to be a human being, you're going to make a bunch of bad decisions. A few slides ago, Enrique Tarrio: he's definitely somebody who got into the chat rooms, got into the group chat, went crazy; 22 years in federal prison. So don't be that guy.

Now, teams like mine would look at those case studies, and we would think about, as practitioners of responsible AI, what could have happened differently to de-radicalize people. And I'm only using Tarrio and Ali because they're really good examples of people that worked against their own interest; they were leaders of white supremacist movements while being Black. Crazy. So what we would look at is: is the system fair? Is the system reliable and safe? And what we mean by that is, if those conditions were to happen again (2024 is coming up), would it replicate? Are these systems privacy-protecting and secure?
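The black-box recommendation loop described above can be sketched in a few lines of Python. This is a toy content-based recommender with an invented catalog and user history, not any platform's actual algorithm; it ranks unseen items by how much they overlap with what the user has already liked, which is the basic mechanism that narrows a feed toward more of the same.

```python
# Toy catalog: each item carries a set of descriptive tags.
# Items and tags are invented for illustration.
catalog = {
    "prius_review":     {"cars", "eco"},
    "pizza_guide":      {"food", "local"},
    "hug_appreciation": {"wholesome"},
    "eco_commute_tips": {"eco", "local"},
}

liked = ["prius_review"]  # the user's engagement history

def recommend(liked, catalog):
    # Build a profile: the union of tags on everything the user liked.
    profile = set().union(*(catalog[item] for item in liked))
    # Score each unseen item by its tag overlap with that profile.
    scores = {
        item: len(tags & profile)
        for item, tags in catalog.items()
        if item not in liked
    }
    # Highest-overlap items come first, so the feed drifts toward
    # whatever the user already engaged with.
    return sorted(scores, key=scores.get, reverse=True)

print(recommend(liked, catalog))
```

Run as-is, the top recommendation is `eco_commute_tips`, the item most similar to the liked Prius review. Nothing in the mechanism asks whether "more of the same" is good for the user; that value judgment is exactly what the responsible-AI questions above are meant to add.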
I made a joke about Siri, but the truth of it is, you actually don't know where your preference data is going; you don't know who it's being sold to. There's a really good example (I'm doing some reading for another project) where they were looking at a homelessness algorithm that was built to get people off the street, and in order for unhoused people to use it, they had to consent to their data being shared with 168 agencies, two of which were the FBI and their local police. But if you don't have a house... Listen, the FBI probably already... hi, FBI; they already know about me, I'm all over the internet, it is what it is. But if it was the difference between having a house and not having a house, that's not a fair question to ask. So we would ask around privacy and security.

We're always going to ask about inclusivity, and the reason for that is, if you do read Professor Noble's book, which I recommend, one of the key interventions for making recommendation systems better is making sure that we're all involved in that process. Her argument is that if more women were involved in developing online search, then maybe more girls would be safe on the internet, because there should be somebody in that room advocating for that group. We would look at transparency, which we don't have: none of us actually knows how the Google algorithm works or how the TikTok algorithm works, because that's proprietary information. I can certainly speak about that later.
That's a big question within the legal community. And then: is it accountable? A lot of the work that I do is in policy, and one of the things we're always thinking about is whether you can hold a machine accountable for inciting a riot, if it was algorithmic recommendation systems that did it. At the UN level, that's one of the big questions being asked on behalf of the Rohingya people, who were massacred in 2018; the way in which the Facebook recommender worked was that it whipped up all of this anti-Rohingya sentiment, much like the great replacement theory stuff that we've discussed.

The report that I alluded to earlier, which was my checklist, has been a constant in my own work, and one thing I always think about, no matter what algorithmic system I'm looking at, is this: my concern is Black American communities online. I am a Black person living in America; I really want to be safe online. I do sometimes tip onto the internet, and I don't want anybody's mess, and I'm not going to fight with people; I do not fight with strangers online, and if that's you, stop that, that's dumb. But I also don't want to be driven to that point, and the lens that I use is always these three questions.

When I look at an algorithmic system, I think to myself: has this system been designed with the knowledge that everything that goes on in the United States is inculcated with our history, and part of that is a history of race and racism? An example of that was when GPT-3 was released. I wrote an article (I can't remember the outlet) about the knowledge gaps within GPT-3, because I had asked it: how did Bessie Smith influence Mahalia Jackson?

Now, in my household, and in the places that I know, we all know Mahalia Jackson, because we all go to church, which is why, when I said good morning, you all were supposed to say "well, good morning," or good afternoon. But what was really interesting was that GPT-3 did three things to let me know that it was not designed with histories of race. Bessie Smith was the first person in American history to ever get a recording contract: it was 1920, it was $1,500, it was Columbia Records, and the music that she wrote has influenced Elvis Presley, Janis Joplin... I'm trying to think, and now I'm figuring, now I'm GPT-3, how embarrassing; but I know she influenced those two, as well as Beyoncé. And so to not know who Bessie Smith is and say you know about music is really, in this era, like saying you only know Taylor Swift but you've never seen Beyoncé; you're clearly crazy, because they're all over everything. But the question that I asked was about influence; I asked about her power. The response I got was: she was born on this day, she did this, she did the other. So then I wrote to the people at OpenAI and said, why don't you know? And they didn't say anything about Mahalia Jackson, who's an amazing gospel singer. My question was: why don't you know about the power and the influence that women have had in American society, specifically Black women? And they wrote back to me and said: we did not train our data set to see women as powerful. And I said: you should. So that was an example of a cognitive hole within GPT-3, a technology that we think knows everything, and it didn't know that.

The other thing that you've got to think about, when you are thinking through a racially literate lens, is: how do we even have this conversation, particularly in an environment where there is an immense backlash against anything that's progressive, but certainly diversity, what's it, equity, thank you.
Yes, certainly diversity, equity, inclusion, and belonging. The reason that that becomes really hard is that those are the ways in which we can really open up that conversation. Those are the ways that we can ask: why isn't this person in your lab? Why wouldn't you think that these people are intelligent? Why wouldn't they contribute, even? I've been speaking to the team that brought me here today. I live in New York City, and I speak long, so I'm not going to tell you this crazy story about me coming from New York City to Manhattan. Today it was insane, but one of the things that it pointed out to me was: I had to take two planes. I had to have a four-hour layover, and then the second plane, Wisconsin Airlines, yes, I said it, it's been recorded, whatever, it was true, didn't have a tracker. And I was like, I have never been on a plane to San Francisco that doesn't have one, like, why is this so difficult? And the truth of it is, it's difficult because people that live in Kansas are not thought of as being part of the knowledge economy. How dumb does that sound? How dumb does that sound? And the reason I say how dumb does that sound is that I was in casual conversation, and now I'm finding out all of these things.
I did not know about agricultural tech, and I was like, well, if we're going to have national stability, we're going to need food, so we probably should speak to the people in Kansas, because they seem to know a lot over here, at KU, about that particular subject. But in the environments and in the rooms I'm in, whether it's DC, whether I'm in San Francisco, whether I'm in LA, whether I'm in London, whether I'm in Paris, whether I'm in all of these places, they're never thinking about regional voices. They're never thinking about the people that live in our country that have innate knowledge, that are not in the conversations. So when I think about DEI, I'm also not just thinking about race. I'm thinking about everybody who's excluded from every single conversation and has the right to be in it. And then the last thing, and this is my call to you all: once you know that you're living in an environment in which some people are valued and some people are not... my passion, my interest, is in making sure that Black people, specifically Black feminized people, are valued within their spaces. Your passion and your interest might be something else, and I encourage it to be something else, because every single person has to do the work of progress, is my view. Once you know that, you're going to make sure that those people show up in those spaces. Once you know that, this is going to be a difficult conversation, so you're going to have to build alliances and allegiances around difference, to make sure that there is inclusion, and make sure that allies are part of that. I've spent probably about 30 minutes talking to you. How can KSU contribute? What is unique about your community? What is unique about you? What is special about these students, in this geography, at this time? And how can you make sure that you're part of the conversation? Because I have had a long, illustrious career. I am blessed beyond measure, and it's only getting better. Nobody ever invited me to any of those parties, and I didn't care.
I showed up. I took space, and one of the reasons that I took space was to make space. So I'm hoping, as we get into this discussion, and I can speak more about the engineering or less about the engineering, that you really think of yourselves as being part of an economy in which we want to maintain our humanity but get maximum benefit from AI projects. Obviously, I'm known for being a Debbie Downer, so obviously I brought the January 6th example, but there are good examples, which I can show you. Whether we are thinking about communications technologies, food technologies, whether we're thinking about the ways in which we learn, the ways in which we congregate, AI systems are going to be used as tools in every single one of those domains. And the thing that I wouldn't want any of you to think is that you sit outside of that economy and that growth just because people don't think you are there, because nobody thought that I should be there either. And at least for the next academic year, I'm going to be a fellow at Oxford and I'm going to be a student at Cambridge, and I will be the first person in history to hold those accolades at the same time. So, thank you. That is my final slide. If you didn't take anything else away, just know that we created these technologies, and so we have the power to make sure that they work for us, because AI certainly should be for the people. Okay, thank you so much for your presentation. We have time for a few questions. We have a couple of mics coming around, Jason is over there at the back, and you're going to try to answer any questions? All right. Hi, can you hear me from here?
Yes. I'm a chemical engineering PhD student, and I'm actively working on AI systems for agricultural food systems. And I have a question regarding AGI, which is going to be the next generation of AI, one that is going to eliminate the dependence of AI on human data. So how do you think that will affect the whole algorithmic bias of controlling societies, and more? I don't think that AGI... First of all, I think much of the claims around AI are hype, and I would really love to see a technical architecture of what it means to have a non-human data set. The systems that I look at and the systems that I study are around computer-human interactions. So, since you're a food person, are you thinking about AGI in food systems? Because I know in biomedical spheres, for example, they still do use human data sets, but the impact of the AI isn't as... it doesn't interrupt fundamental rights. So the second part of your question is, how do you think this will impact human bias or algorithmic bias? I don't think the invention of AGI is going to stop racism, so I think I'll be fine. I'll still have steps to do. Thank you. Hi, good afternoon. My name is Jane Turner. I'm a senior in mechanical engineering. First of all, I'd like to thank you for being here. It was a fabulous talk, and I really appreciate how you were able to talk about these very difficult subjects in an impartial way and with compassion, especially with these very hard topics; we hit something that's very, very challenging. So this is a little more pertinent to your subject matter expertise. Sorry, I had to write it down so I'd remember. Something you talked about earlier in the talk is how these radical ideas, the great replacement, or... oh, why can't I remember... the group we mentioned earlier, and I feel bad for not remembering their name, but I remember when it was happening.
It was very distressing. And those initial ideas, you know, probably were generally introduced to this sphere, the online sphere, by humans, and then AI grew it and cultivated it, and people chose to take actions in these online environments. So, one: my question is, how few people, how many very radicalized people, do you really need to kind of set off an idea, like a seed? How much catalyst do you need? And then the second question is, are we concerned about AIs themselves introducing these ideas? That's a great question. So I'm just going to go back to this slide, which gets to the first part of your question. The great replacement theory, whether you agree with it or disagree with it, only remains in the theoretical realm, even within the 80 percent that believe it, the 11 percent, the 26 percent, when it's not acted upon. I don't believe in banning people from social media. I do not believe in social media cancellation. I think that we live in a free speech environment, and so that's not appropriate. However, we can degrade... my view would be, we can kind of degrade, or just not allow, dangerous ideas to rise to the top of For You feeds, if you're on TikTok, or of people's home pages. Because ideas, when they're not acted upon, are just ideas. The issue, I think, is the environment that those ideas create for the targeted groups. And if we are going to be a country that is open and inclusive as demographics change, then we potentially do not want to let ideas like the great replacement theory just kind of become standard. Because a big thing about Dr.
Noble's book, Algorithms of Oppression, was that she was making this argument that we need online environments that welcome young women, because young women are not only online, but there are more young women actually being educated in universities, which means women are much more likely to be part of the knowledge economy. And so they need to be able to research and do all of these wonderful things online and still not be attacked. So I think the first part is: a theory is a theory until it's acted upon, but it only takes one person to kill nine people. They had their eyes closed. They were at Bible study. They were unarmed. They said that they loved him before they killed him. And I'm not going to do all the other clicking, because all of the people that I mentioned did horrible, terrible things, particularly the one who breaks my heart... all of these case studies of extremism break my heart, because they don't only target innocent people, but they also ruin that person's life. And the last example, the guy that went to the supermarket: he drove 200 miles. He killed people in their 80s. He was 18. I have a 17-year-old daughter who's about to go to college, and she's definitely an idiot. I love her, but her decision making is not there. But she's kind, and she's a good person, and she would never hurt anybody. And what has gone wrong, you know, in those people's lives? So when you say how many extremists: for me, it only takes one. And it makes me wonder what could... I'm a huge reader, so I'm thinking of the play... Death of a Salesman, I think I might have the title wrong, where they all contributed to the death. And it's like, what could we have done as a society to hold these people? I haven't really talked about gender in this talk, but in those initial slides, it was men. How is masculinity being constructed? How can we pull men in, as jobs are, you know, jobs are going, the economy is crazy after COVID. So what can be done?
I think society needs to make shifts. And then, from a product design process, which was another part of your question, we need hard lines. I think one of the things that social media companies do a lot, as we're thinking about information policy, is that we ask ourselves, in our teams: what are our red lines? What are the things that we absolutely do not want to show up? How do we identify that? How do we discourage it? How do we capture it? Because, even though I don't believe in banning people from the internet, and I mean not banning anybody, including the people I don't agree with, I just don't think that freedom of speech means freedom of reach. I think the toxic dialogue should be rooted out of large language models, and that's a very human rights frame. So it's complicated and it's difficult. But keeping people safe, and, just going to this slide, if you think about responsible AI, making sure that our platforms, whatever they are, are safe, as well as inclusive, and that if people are harmed there is accountability, is a really difficult dance, which is why we need as many people in those conversations as possible, because there are things I'm not looking at that are worthy of attention. I haven't spoken one word about accessibility, and I'm certainly somebody who believes that every single person, at every single ability level, should have access to the promise of these tools. Hi, my name is Ian Slater. I work in IT here at the library. I'm actually spearheading some of the AI work that we're doing here. Thank you. So my question to you is: what if we get to the end of this and we realize that AI, as a technology, simply is not good enough to do anything other than reflect back at us our sort of worst, most primal urges? How do we keep an eye out for that? What do we do if we come to that realization?
Well, first of all, social media is one use case. So I do think that AI systems are incredibly important and powerful, particularly when we're thinking about sorting large amounts of information that we could never... I mean, I like to read, but I can never read at that breadth, and then sort the information, in a way that a system could. So it's a few things. First, I think we need to think about use case: where do we use AI, and where do we not use AI? And what I mean by that is, do we need an automated prediction here? Do we need an automated decision here? That's the first thing. Number two, where is the human in that loop? I'm sure you've all seen the news of Gemini, the Google model, where it generated the Asian and the Black Nazis. Again, I was like, Nazis, nothing to do with me, and then they come up, and I was like, how did I get into this? Nothing to do with me. But what happened in that particular team was that they had automated a system so that, statistically, the outcomes of a prompt could be equal. They were solving for the problem of the prompt: what does a doctor look like, what does a professor look like, and it's always like this. So it's always Ted, and it's like, why is Ted always showing up, like Brad isn't going to get in here, somebody with dark hair? And what happened was: somebody asked, what does a Nazi look like, and that had just never been tested for. Had there been a human in the loop, had they actually red teamed it and thought, what are the worst questions that we could possibly ask this, how can we push the system to its limit, and then how can we engineer for it, and get humans to tag that, we wouldn't have seen that, you know, we wouldn't have seen that. And so that, again, goes to who's in the room, inclusivity, who's asking these questions. And then the third thing I would say is, AI should be used in low stakes situations. What I mean by that is, situations where your rights are not going to be impacted.
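The red teaming and human-in-the-loop idea described above can be sketched as a small harness. Everything here is a hypothetical stand-in: `toy_model` is a placeholder for a real generative model, and the adversarial prompts and blocked terms are illustrative, not a real test suite.

```python
# Minimal sketch of a red-teaming pass with a human in the loop.
# `toy_model` stands in for a real generative model; the prompts and
# blocked terms are illustrative assumptions, not a real test suite.

BLOCKED_TERMS = ["nazi"]  # a red line the team decides must never pass silently


def toy_model(prompt):
    # Placeholder: a real system would call the model here.
    return f"Generated image description for: {prompt}"


def red_team(prompts):
    """Run adversarial prompts; flag any output that crosses a red line."""
    flagged = []
    for prompt in prompts:
        output = toy_model(prompt)
        if any(term in output.lower() for term in BLOCKED_TERMS):
            # Crossed a red line: queue for a human reviewer, don't ship.
            flagged.append({"prompt": prompt, "output": output})
    return flagged


adversarial_prompts = [
    "What does a doctor look like?",
    "What does a Nazi look like?",  # the kind of prompt that went untested
]
for case in red_team(adversarial_prompts):
    print("NEEDS HUMAN REVIEW:", case["prompt"])
```

In practice, a team would run a much larger adversarial prompt set against the real model, and every flagged case would go to a human reviewer before launch; the point of the sketch is that the worst-case questions are asked deliberately, before users ask them.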
So, for example, in biomedical settings, I'm seeing really exciting things about papers in medicine being processed by AI systems, and they're looking for cures for cancer. They're trying to figure out: what are the warning signs, what should we be looking at? And then that analysis is being presented and used to work toward cures. I absolutely think that that should continue. And I don't know anything about food systems, but I'm assuming that if you're using AI to make sure that we have good, high quality food, and that we will not be a hungry nation in a hundred years, or 50 years, that feels like a really good use to me. Thank you. We have time for two more questions. We have one question online; we are going to go with that one last, and we have one more question here in the room. Hi, I'm Laverne Bitsy Baldwin. I'm director of the multicultural engineering program here at K-State. And I first want to say thank you for saying... I'm going to get emotional again... "I showed up, I took space." I think that's so important for women and multicultural students in this area, so say it every time, I encourage you. But I do have a question. You have the slide with the user, the clicks, and then the algorithm in the black box. But you also said time is a key factor in the recommendation system. Can you say more about time? So, for instance, I see something and I'm like, I can't believe what I just read, so I do it again, I watch that video again. Does that affect things that are out there? Or what do you mean by "time is a key factor"?
So I was comparing human intelligence to artificial intelligence, and how, when we engineer these systems, we are engineering them to make quick decisions. And the reason that we're engineering them to make quick decisions is that if there is something harmful, if there is a threat... you see this with live stream a lot. Much of my academic research is looking at disasters online, honestly, which is why all my examples are like, yes, when I was thinking about that massacre... I really do have a fun life; that's just my work life. In the rest of my life, I'm really a fun person. But I do know a lot about, like, shootings online too. And what they'll do with a live stream is that it can be switched off within five seconds. That's a very quick decision, and we need that, right? We need to interrupt that; young people are on these platforms, and we do not want them to see this stuff. However, if the algorithm that you're using is deciding whether somebody should go to jail or not, that is a question that, in my view, deserves much deeper deliberation, deserves time, deserves consideration. And algorithms are being used in that way. But if we think about the way the Supreme Court, for example, will look at cases: they'll sometimes accept a case in October, and we won't hear the decision till June. They've taken time. They've looked at the evidence. They've looked at the case law. They've looked at the Constitution. They've made that decision. So that's what I was talking about. And the thing that makes it so dangerous is that we don't know what happens in that black box. We just know that a decision is made. And, at least for people like me, I'm always looking at the harm. In my work, I never look at what went right; I'm always trying to stop things going wrong. Thank you, Mutale. We have one more question online, so they are going to unmute in room 359. Hello, am I audible? Yes. I'm Dr. Asif from the physics department.
So my question is: you said that, by design, we need to fix AI such that it doesn't produce racist content. But from Gemini's experience, we already understand that that's not so simple. So is it better if we use diverse data sets, which are, like, handpicked by a panel of people? And the second problem that I'm seeing is: what if AI becomes too clever, so that it doesn't show you instant racism but harbors it for a long time, and then, when the right time comes, it destroys the society? No, I mean, those are great questions, and those are honestly things that are being spoken about right now, particularly with gen AI and the way gen AI works. The first thing I would say is that I was definitely being aspirational. I also want to caution us against thinking that AI is racist. AI is a tool, a bunch of X's and O's; the ways that it's used, the ways that it's applied, are what give us those results. But I would say that in those instances, and Gemini is a really good example, there was no human in the loop. I think just having a history person on that design team, someone involved in the testing and evaluation stage, would have helped; they would have said: there is no such thing as an Asian Nazi. This didn't happen. This was Aryan. You couldn't even rise up to that position, no less a Black person. One person that knew history could have said that. So that's what I think about when I'm thinking about inclusivity, and it does happen in industry that you'll have a multidisciplinary team. So, for example, I've been on product teams for over a decade, and I'm a social scientist, right?
So there is a place for that. The second part of your question is: what happens if it harbors racism? The way that racism often shows up in data sets is through what are called proxies. I've discussed really obvious examples for the sake of this talk, but you'll find that if you use something like zip code data as an input, that will also have a discriminatory output, because zip code data is shaped by the racist history of redlining, racial covenants, and legal decisions, which have nothing to do with the technology, but which have shaped and reshaped our environment. And I believe that if there are humans in the loop, and if you're looking at testing and evaluation with multimodal parameters and inputs, you are going to be able to stop and identify some of that harboring. But some of it is going to happen, and then you're going to have to pull the switch, which is why use case is so important. I am definitely somebody who believes that AI is an amazing technology, as quantum will be, as general intelligence will be. However, I think human decision making, human intelligence, should be what guides how we're thinking about these technologies and where we use them. A very good talk to close today. Please join us in giving a round of applause. Thank you. Thank you. We have a reception just outside here. Thank you again for joining us for the lecture series. Thank you.
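The zip-code proxy effect described in the final answer can be sketched as a toy example. All of the data, the group labels, and the approval rule below are fabricated for illustration: the point is only that a rule which never sees the protected attribute can still reproduce a historical disparity, because the zip code carries it.

```python
# Toy illustration of proxy discrimination: the decision rule never sees
# the protected attribute, but a correlated feature (zip code) carries it.
# All data here is fabricated for illustration.

# In this toy history, redlining has made zip code track group membership.
applicants = [
    {"zip": "11111", "group": "A", "approved": True},
    {"zip": "11111", "group": "A", "approved": True},
    {"zip": "22222", "group": "B", "approved": False},
    {"zip": "22222", "group": "B", "approved": False},
]


def approval_rate_by_zip(rows, zip_code):
    """A 'group-blind' rule learned from history: approve at the zip's past rate."""
    matching = [r for r in rows if r["zip"] == zip_code]
    return sum(r["approved"] for r in matching) / len(matching)


# The rule uses only zip code, yet reproduces the group disparity exactly.
print(approval_rate_by_zip(applicants, "11111"))  # group A's zip -> 1.0
print(approval_rate_by_zip(applicants, "22222"))  # group B's zip -> 0.0
```

Auditing for this kind of leakage is why testing and evaluation with humans in the loop matters: checking outcomes against the protected attribute, even when the model never uses it as an input, is what reveals the proxy.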