Hi, and welcome, everybody. Thank you for joining us. We are also simulcasting online as a Zoom event, so hello to our friends on Zoom, and thank you to our stellar Library AV team, which is making this happen. Very, very cool. My name is Kate Epler. I am the digital equity manager here at San Francisco Public Library, and today is day five of SF Tech Week. Cheers. If you're interested in AI, I want to let you know about two events coming up on Saturday. One is at 1:30 in the Latino Hispanic room downstairs: AI and Robotics, a multimedia talk. Then we'll have some hands-on chances to play with robots and program them using AI later that afternoon. Tech Week program guides are over on the table; please don't forget to grab one if you don't already have one. And now it is time for my profound gratitude for Scott Mulvay, longtime friend of SF Tech Week, formerly the director of AI and Global Partnerships for Microsoft Philanthropies and Microsoft Cities. He serves on the national boards of Upwardly Global and the Urban Age Institute and co-owns The Well. I'm excited he could join us today. Thank you for making the time. I'm going to turn it over to you.

Alrighty. Thank you for having me. I pace a bunch, so I'm not good at standing in one place; I'm probably not going to be at the podium a whole lot. But thanks for coming out and spending some time. What I want to do is give a talk with an arc: the challenges we see with AI and data, some of the good scenarios where we can really drive a lot of positive change, and a look at some of the societal challenges that AI might make worse, which we're going to need to address if we want to keep society rolling forward. So those are the three buckets we're going to talk about: the challenges in the data, the good, and the ugly.

So let's get into the bad. We're going to talk through how bias gets into data. We talk a lot about the problems with bias in data, but I don't think we've had many good discussions about the sources of that bias, so I'm going to walk through some of them.

One is that reality itself is biased. If you go out and do a search for CEO, you get a lot of middle-aged white guys, which is problematic because it doesn't give you a view of who could be a CEO. The challenge is, that's kind of reality: that is the gender and racial makeup of CEOs. You do a search, you say, show me photos of CEOs, and you get something that looks like that. The good news, and I took this screenshot in 2019 or so, is that search engines are now curating the results a bit. I don't know how they're doing it, but I imagine they're using some statistical sampling similar to polling, where you over-weight underrepresented groups and de-weight overrepresented groups, the way pollsters correct for whoever happens to pick up the phone and say who they're going to vote for. I don't know; that's my speculation, and the toy sketch below shows the flavor of the idea. You also see, though you might not be able to see it when the slide is small, that they prompt you for other searches: maybe you want to search for women CEOs, maybe you want to search for Fortune 500 CEOs. So that's part of the challenge: when the data sets we have reflect reality, and reality does not reflect our desires.
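A minimal sketch of the re-weighting idea the speaker is speculating about, with entirely invented data: re-score relevance-ranked results so each group's share is pulled toward a target mix, the same inverse-proportion correction pollsters apply to skewed samples. This is an illustration of the polling analogy, not how any search engine actually works.

```python
from collections import Counter

def reweight(results, target_share):
    """Toy re-ranker. results: relevance-ordered list of
    (item, relevance, group); target_share: group -> desired fraction.
    Underrepresented groups get boosted, overrepresented ones damped."""
    observed = Counter(group for _, _, group in results)
    n = len(results)
    rescored = []
    for item, relevance, group in results:
        # Correction factor = desired share / observed share.
        correction = target_share.get(group, 0.0) / (observed[group] / n)
        rescored.append((item, relevance * correction, group))
    return sorted(rescored, key=lambda r: r[1], reverse=True)

# Made-up example: 8 of 10 results are group "a", but we want a 50/50 mix.
results = [(f"photo{i}", 1.0 - i * 0.05, "a" if i < 8 else "b")
           for i in range(10)]
for item, score, group in reweight(results, {"a": 0.5, "b": 0.5}):
    print(item, group, round(score, 2))
```

Running it, the two group-"b" photos jump to the top of the ranking even though their raw relevance was lowest; real systems are surely more sophisticated, but that is the basic mechanic.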
Then another source is data voids, where we don't have consistent data. If you go out and do a search for, I think it's toddler haircut or toddler hairstyle, you get a not very representative group of toddlers. Let me turn the volume off on this. It's not like only certain people need haircuts. Everybody gets, well, I don't, but most people get haircuts. There's no racial data, no gender data, inherent to photos of haircuts. So why do the search results look like this? Well, I'm not a parent, but I'm told that mommy blogging, momfluencers, which I think is the new term, is a predominantly upper-middle-class white activity. And therefore the people who tend to talk about toddler haircuts, cute toddler haircuts, and what you should do, tend to have kids who look like that.

This also works in reverse. There are cases where all the examples are of one subgroup, but there are also voids where people are just not in the data at all. One example is genetics: the genetic data sets are largely of white folks. So if you're going to go out and do disease tracking, if you're going to look at connections between behavior and genes, you are only looking at one group of people. And if you are not in that genetic data set, you don't exist. So there are some societal challenges around that. On the humor side, some folks may be familiar with Dan Savage and his redefinition of "santorum" away from the senator from Pennsylvania. Look it up if you're curious; it's probably not a conversation for here. But it shows people can game that system too.

Another source of challenges is just poor use of training data. We know how this works: you get a big data set, the model looks through it, and then it tries to make predictions about data it hasn't seen before. So this one is simply a mistake by the creators of the AI, in that their training sets don't represent everyone. A very common example is facial recognition. It's gotten better over the years, but for a long time it was primarily good at identifying white people, and not so good at women, not so good at people with darker skin. Part of that is technical: women tend to wear more makeup, so faces are sometimes more difficult to recognize. But that's also a bit of a cop-out, because a lot of it is facial geometry, not just what you would see in a photo. It does require people to take an extra step and do a little more diligence.

But another cause of this is Shirley. Shirley was an employee of Kodak. When the government forced Kodak to separate its developing business from its film business, Kodak had to find a way to make all of these development labs across the country calibrate their development process so the results looked right. So they took an employee named Shirley, took photos of her, and Kodak developed them so they looked proper. Then they sent undeveloped negatives out to all of these labs, the labs developed them in their own process, and they compared the results. As soon as Shirley matched, the lab had a good development process. Over the years there were a lot of Shirleys, and in the industry these are known as Shirley cards. And they all look kind of the same.
In fact, the really depressing part of this is not only the limited imagination of the developers of the photographic process, but also that the kicker, the only time this really changed, was furniture manufacturers who were getting poor photos of their oak tabletops and such in magazine ads. They said, oh, maybe we need to include some darker images in the calibration. And it was not just the film developing process: the chemicals that were used, the papers that were used, were really designed to highlight lighter skin tones and be more accurate there. So even if you calibrated your equipment better by expanding the diversity of the models, the underlying paper technology was still going to give worse results for people with darker skin tones. This goes back a long way; I think the first ones were in 1958, I don't recall exactly. So we talk about problems in data, but this predates that: it's an old analog process that has really shaped photography over the years.

So we don't get good family photos. But there are real-world implications too. The ACLU, a number of years ago, took the photos of the members of Congress and ran them against mug shots using one of the big cloud providers' matching algorithms. And 28, I had to check the number, 28 came back as hits: of the 55 members of Congress at the time who were African-American, 28 of them matched mug shots. So if you were a police department doing photo lineups algorithmically, you're going to get a lot more hits for people with darker skin. Presumably police departments do a little due diligence; they don't just go arrest people based on a matching algorithm. But that means they're going to have a much bigger pool of people with darker skin to weed through. It's inefficient, and they're also going to go searching in the backgrounds of those people. They're going to over-police them, and there will be all sorts of other negative externalities that come from that.

Another example: ProPublica did a study of sentencing guidelines, sorry, pre-trial release, I forget the proper term, where there's essentially a risk-scoring algorithm for a person who's been charged: how likely are they to reoffend? Based on that score, judges would determine how likely someone was to be released before trial. I said parole boards; probably not parole boards, that's after, sorry. And the algorithm was horribly flawed. Not only did it overrate the risk that African-Americans would reoffend, much higher than the actual rate when you look at who was later arrested for violating the terms of release, but it also significantly underestimated the risk for white folks. So it released more of the people who did reoffend and incarcerated more of the people who didn't. It failed on both sides of that challenge.

But for me, the bigger and more damning piece of the challenge around AI and data is that it's more than just data. We've talked about how bad data drives bad decisions: we have data that doesn't reflect our aspirations for what society should look like, we have data that doesn't reflect society at all, and we have careless developers who create training sets that don't match reality. But the bigger problem is that a lot of this is feedback. We are the problem. Not the creators, not the data: the users. Amy Webb has this great quote, which I love.
She's a futurist who does a lot of work around advertising and data, and the quote is that we are AI's study buddy, and we are teaching it all of our misogyny and hatred.

Some examples. If you searched for Tanya, the ads displayed against that were things like: get your high school photos, see what Tanya's doing today, see Tanya's career, what have you. If you searched for LaTanya, Tanya being a more common name for white girls and LaTanya a more common name for Black girls, you would get: arrest records, see her mug shots, has she been arrested? And there's nothing nefarious in Google's ad platform that is trying to target ads based on the race implied by a name. It's feedback; it's auctions. They want to display the ad that will get them the most money, and that is the ad that combines the highest auction price someone has put up, I'll pay two cents a click versus a cent and a half, with how likely it is to get clicked. So the reason the arrest records bubbled up is that we, the users, were more likely to click on arrest records for Black people, and more likely to click on "I want to connect with my old friend" for white people. There's nothing in the data. It's how it's used, and it's used by all of us, not some bad cloud developer somewhere. The toy sketch after this section shows how that ranking math plays out.

Another example: search for three black teenagers versus three white teenagers. Search for three black teenagers and you got unhappy-looking people; search for three white teenagers and you got smiling, happy kids. We talked about the auction side, how ad companies want to maximize revenue, which makes sense. They also do that on the search results side: if a result is more likely to be clicked on, that's probably the right one, that's probably what people want.

The good news is that a lot of these are old-ish examples, and the industry has made progress. When you search for three black teenagers now, you don't get those results anymore. But that bias was baked in before, and it took effort to solve it; there's probably lots of bias we just haven't made the effort to find yet. So I'm positive about AI and about data, because when things like these are identified, they get fixed. It's a technical thing: you change your algorithms, you change the weighting, and you get better results. We, on the other hand, are really hard to fix. I have unconscious bias baked in from living 50-plus years, being bombarded by imagery and stereotypes and storylines that are really hard to overcome. I try really hard at it, but I still find myself jumping to conclusions, like, oh my god, that's embarrassing.

If you haven't taken it, and this is an aside from what I was talking about, Harvard has an implicit bias test. You can go out and search for "Harvard implicit bias." They'll show you some photos and some words, you have to make some associations, and then it gives you a ranking of your bias. Don't do it when you are unhappy, because you will feel very bad afterwards. They have tests for age, for weight, for disability, ethnicity, religion; there's a whole scope of these, each a 10- or maybe 20-minute test. You can go take a bunch of them, and most everybody, I'll say everyone, because I'm sure someone will come up to me and say, oh, I took it, it was fine, everyone I know thinks they're a horrible person after taking that test.
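To make the ad-auction mechanic described above concrete, here is a minimal sketch with invented bids and click rates: the platform ranks by expected revenue, so whichever ad users click most tends to win, regardless of what the data "says."

```python
def pick_ad(candidates):
    """Choose the ad with the highest expected revenue:
    bid per click x predicted probability of a click."""
    return max(candidates, key=lambda ad: ad["bid"] * ad["ctr"])

# Invented numbers: the arrest-record ad bids less per click,
# but users click it three times as often, so it wins the auction.
candidates = [
    {"name": "reconnect with an old friend", "bid": 0.020, "ctr": 0.01},
    {"name": "see her arrest record",        "bid": 0.015, "ctr": 0.03},
]
print(pick_ad(candidates)["name"])  # -> "see her arrest record"
```

Nothing in that code knows anything about race; the users' click behavior does all the steering, which is exactly the speaker's point.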
So some examples, a Microsoft example and, I think, a Google example after it, of what happened once these problems were identified. When facial recognition was identified as being biased, how do you fix it? The fix for Microsoft was to go get better training data. They happened to use photos of parliamentarians in Africa. It was as simple as crawling the websites for Uganda and Sudan and Mozambique and asking, who are your elected representatives? Those countries oftentimes have more women in parliament anyway. And they augmented the training data with that. Similarly to the santorum example, neo-Nazis and anti-Semites were poisoning search results: when you searched for "did the Holocaust happen," as an example, you'd be pointed to a denier website. So Google went through and changed the algorithm so that more trustworthy sites percolated up, and you don't get that anymore. So there are lots of problems with data and bias. We need to identify them, advocate, call them out. But once we do, most of the big technology providers want to make that change; they don't want algorithms that show disparaging results.

So that's the bad end of data and AI. Now I want to spend a little time on some positive use cases. I spent about seven or eight years at Microsoft running a chunk of our AI for Good program: sustainability, accessibility, healthcare. I worked on a bunch of projects that applied AI to big global challenges, and I'm going to highlight a handful here; for the Q&A I've got 50 more. I limited these to ones I thought would be interesting to this group and would fit 10 or 15 minutes of chit-chat.

One is in the sustainability space, around illegal fishing. Identifying illegal fishing is really hard, because illegal fishing looks just like legal fishing, other than they catch too much, they catch it in the wrong place, or they catch the wrong species. If you just look at a trawler, you have no idea. And the impact: illegal fishing depletes our food stocks. Thirty to forty percent of the population gets the majority of its calories from fish, and I think it's one in eight, yeah, one in eight people make their livelihood fishing. So more competition, undercutting prices, reducing the supply.

So how do we catch it? Here is transponder data for vessels at a particular point in time. Every ship has essentially a GPS transponder that gives off its location. So what you can do is look at those transponders and compare them to what that particular vessel is permitted for and see if it matches. Are they permitted for line fishing, but the pattern looks a lot like net fishing? Are they permitted for species A, but fishing in areas populated by species B? They're allowed to catch six tons, I have no idea how much fish a permit allows, I'm making that number up, but they've been out for three weeks and probably caught a bunch more than that. I'm going to come back to that slide; I've gotten ahead of myself. They also match this up with satellite imagery. So the inputs they have: satellite data, the identity of the vessel, the transponder information we talked about, the regulations they pull in, and local information, cameras at the side of docks.

And they can pull all of this together. What they essentially do is merge all this data, and then, if you're familiar with a credit score: your credit score is a 700, a 650, a 720, whatever. That doesn't tell you whether you're going to pay something back or not; it just gives you a likelihood. They essentially provide a risk score to local authorities: here's a boat, and it has this percent chance of being engaged in illegal fishing, something like the little sketch below. Then the authorities figure out what to do with it. Maybe if it's a boat they see a lot, they'll do an inspection. If it's a one-off, where the boat usually has a low score and now it's high, it's like, okay, a blip in the data, not a problem.
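A minimal sketch of that kind of risk scoring, with invented signal names and weights (the real system's features and model aren't described in the talk): several warning signs combine into a single 0-to-100 likelihood, a flag for inspection rather than a verdict.

```python
def vessel_risk_score(signals, weights):
    """Combine warning signs (each scaled 0..1) into a 0-100 score.
    Like a credit score, it expresses likelihood, not proof."""
    raw = sum(weights[name] * value for name, value in signals.items())
    return round(100 * min(max(raw, 0.0), 1.0))

# Purely illustrative weights and signals.
weights = {
    "gear_mismatch": 0.4,       # permitted for lines, moving like a netter
    "zone_mismatch": 0.3,       # fishing where its permitted species isn't
    "days_at_sea_excess": 0.2,  # out far longer than its quota explains
    "transshipment_flag": 0.1,  # satellite shows offloading at sea
}
signals = {"gear_mismatch": 1.0, "zone_mismatch": 0.0,
           "days_at_sea_excess": 0.5, "transshipment_flag": 1.0}
print(vessel_risk_score(signals, weights))  # 60 -> worth an inspection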
So I want to give an example of the satellite imagery. They get satellite imagery and merge it in, and what this is, when you blow it up, is a fishing boat potentially offloading to another boat. This is how they can say: they only reported six tons, but they caught six tons 20 times. And not only is this a fishing problem, it's a human rights problem, because, I think that one's the fishing boat, I don't know which, a lot of these boats will stay out for years at a time with essentially slave labor and never come back to port, so those people can never get off. So this is also a human trafficking issue, beyond just a fishing issue.

Another example is CSIRO, a big science institute in Australia. There's an endangered turtle, and, I will get my number wrong, but in this area there's something like 1,500, maybe even 15,000, kilometers of coastline where the turtles nest. And there are very, very ingenious wild boars that come up, dig up the nests, and essentially devastate the turtle hatchlings. So 1,500 or 15,000, whatever it is, kilometers of coastline are impossible to walk to protect these nests. But pairing that up with satellite imagery, they were able to identify where the particular nesting sites, whatever the proper term is, actually are, and then focus the effort on protecting those areas, building barriers, keeping the wild boars out. That way they went from nearly losing all of the turtles one year to losing only 30% the next. And they're doing this in cooperation with the indigenous rangers who manage these lands. One of the things I particularly like about this study is that they incorporated not only western scientific models of how to identify potential nests, but also a lot of indigenous knowledge. The one example I can remember: where I tend to think of the year in four seasons, the local populations there look at six seasons. So they weighted the seasonal information in these images much differently than I would have, had I built it.

So those are a couple in the sustainability space, and now I'm going to throw out a couple about accessibility: how do we use AI to make it easier for people with disabilities to engage in daily life? In China, Starbucks has a handful of flagship stores where they hire people with disabilities and specifically cater to people with disabilities. A couple of things we worked with them on: one is sign language, text to speech, and speech to text, so people who are hearing impaired can communicate with people who are not hearing impaired.
Someone can write out an order and have it translated into speech that someone can hear, and vice versa: speech can be translated to text, so you get the order right. The other thing that's really interesting, which I didn't know, is that in cafés a lot of the equipment has sound alarms. When's the sandwich ready? Beep, beep, beep. If you are a hearing person, you can multitask: you put the sandwich in for 30 seconds while the microwave warms it, and you can go make a cup of coffee, take a payment, all sorts of things at the same time. If you're hearing impaired, you have to stand there and watch the timer count down. So the store was instrumented with haptic devices, so employees get a vibration when the coffee is ready, when the sandwich is ready, when whatever is ready. It's a great way of really increasing the productivity of people with hearing disabilities.

The other thing we're working on with their employees is videotaping them doing sign language, because we're trying to build a language model around signing. There are a lot of people who want to do gloves, a lot of really weird ideas that basically hearing people think up: oh, I can use AI to help people who sign. It's a much more complicated problem. It's not just word for word; it's really a language model. So their employees are signing a bunch of material for us. The other problem is that while there's a lot of sign language video out there, think of all the government meetings and company meetings with an interpreter in a little box signing what someone is saying, it's all dead-on to the camera, nicely lit, done by professional signers. Most sign language in the wild is at an angle, and the videos are not well lit. If I take a video, it's shaky and at a weird angle; it looks nothing like the professionally produced video running here. So we're collecting normal, human-created videos to learn from. If we didn't, we'd be back to bad training data: a model trained on nicely produced professional signing wouldn't work for me walking up to a person in a store and trying to communicate by just putting up my camera.

Open Sidewalks. This is a project to help people with disabilities route. Cities have spent gobs of money making their public transit accessible, so you can get to all sorts of places on public transit in a wheelchair, on crutches, with a walker, with a white cane. But once you get off the bus, you're kind of screwed. So what this does is map, given a bus stop, the accessible routes within, I think it was, 300 meters. How do you get to a pharmacy? To a coffee shop? To the school? To whatever your destination is? The problem with most other mapping, say I'm going to map a route from here down to Chinatown for dinner, is that we use car maps to figure out how to route a car there and then go, oh, you're a pedestrian, you can go the wrong way on a one-way street. Pedestrians are essentially second-class citizens to the car routing. What this does instead is actually ask: are there curb cuts?
Is there even a sidewalk there? Is there a tree growing in the middle of the sidewalk? It's a nicely accessible sidewalk, but there's a big pole in the middle of it. There's a guy in Chicago who has a great blog of videos of himself out on the street trying to navigate, coming to cracks in the road and places where sidewalks are under construction with no way around. The other thing this lets you do is enter what your constraints are. My wheelchair can go up this steep a hill. I can't deal with cracks; I need a smooth surface, I just can't have any bumps. I'm on crutches. Whatever your scenario is, you can tweak that, and then it will route you properly from point A to point B as a pedestrian, as a first-class citizen rather than a second-class citizen to cars.
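A minimal sketch of constraint-aware pedestrian routing in that spirit, with a made-up sidewalk data model (this is not the Open Sidewalks schema): segments that violate the traveler's stated limits are simply excluded before a standard shortest-path search.

```python
import heapq

def accessible_route(graph, start, goal, limits):
    """Shortest accessible path. graph maps node -> [(neighbor, edge), ...],
    where edge is a dict of segment attributes; limits holds the
    traveler's own constraints. Returns a node list, or None."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, edge in graph.get(node, []):
            # Drop segments this particular traveler cannot use.
            if edge["slope"] > limits["max_slope"]:
                continue
            if edge["cracked"] and not limits["ok_with_cracks"]:
                continue
            if edge["crossing"] and not edge["curb_cut"]:
                continue
            nd = d + edge["meters"]
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if goal not in prev and goal != start:
        return None  # no route satisfies these constraints
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Two travelers on the same map but with different `max_slope` or `ok_with_cracks` values get different routes, which is the whole point: the constraints belong to the person, not the map.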
Okay, so we're coming up on the ugly. We're about 35 minutes in, and this is probably a 10-minute-or-so chat, so start thinking of questions; tough ones are always good, and we'll have time for them at the end. At the end of the data piece, I mentioned a bit about what the hardest pieces are, and the hardest piece isn't technology. The hardest piece is society. To me, that's the ugly part, because if we want to make changes, there is no global fix, no patch to the algorithm that makes it all better. We have to do the hard work of organizing, the hard work of educating, the hard work of changing people's minds about how they envision society, so it's less biased. And that's hard. So I'm bringing up a few thoughts here that are going to be difficult.

One is: what is our expectation around privacy? When we started off with the web in the mid-to-late '90s, there was a very clear distinction between our online life and our offline life. You read news articles, you watched cat videos, maybe you ordered some books here and there, but you left all these breadcrumbs, and there were lots of concerns about social media posts and what they would mean for the idea of privacy. In the last five years, and particularly in a post-pandemic world, we've really started to blur online and offline. I get my food delivered. I get myself delivered in a car to various places. I play immersive games that track my movement. Menstrual tracking apps. Things that are really merging the physical world with the online world, and those have much different implications for privacy. We're getting sensors everywhere: I don't know how many sensors are in my phone, but a bazillion, and there are metal detectors and all sorts of other things spewing off data. There are studies where we can now see through walls and track people's movement based on variation in Wi-Fi propagation: if there's a Wi-Fi access point on the other side of a wall and a person walking back there, you can track that person really, really well. We can identify people at a distance by their heart rate, or walking down the street by their gait. We can't quite do that through walls yet, but we'll be able to. So I'm not a "privacy is dead" person, and I'm not a privacy hawk. My point is that we are going to have an evolving view of what is personal, private, and off-limits versus what is in the public purview, and it's going to take a long time to get there. There's a lot of work coming out of Europe with the GDPR; in the United States, we can't seem to get a privacy policy.

If you listened to the congressional hearings, I don't know if it was the House or the Senate, where they grilled TikTok and a bunch of other social media companies, particularly about how they track children's data: it was billed as being about AI, but those were not AI questions. Those were privacy questions. And the problem is our legislators haven't staked out what privacy should be. The industry needs guidance; we, society, need guidance. We need regulation, we need teeth, we need force to make these companies treat our privacy properly, but they also need guidance on what's expected.

Software algorithms are great at optimization. They make things go faster, cheaper, better. But who decides what to optimize? We have building management systems that optimize the climate in buildings. Do we optimize for comfort? For energy savings? For green energy? For lack of variation? I don't know. Software can do all of those, but somebody needs to make the decision. And you get scenarios where different people want different things optimized, yet someone has decided that Uber is going to optimize for getting me there faster. Maybe I want green transit, which Uber now has; maybe I want a prettier drive. That's a sidetrack; I won't go down it. The challenge here is twofold. One: software is not good at making value-based trade-offs; it does what it's told. So who does the telling? We all kind of mocked, I think it was Sarah Palin, arguing against public health care by warning about death panels. Well, now you have an algorithm. It doesn't percolate up to a death panel, but it is going to make decisions about allocating resources, and it makes decisions about who goes to jail and who doesn't, as in the ProPublica case. So there are scenarios where we need to be very careful about being clear what software optimizes for.

Oh, I had a different slide first. I think this also gets at our post-truth world, where different people have completely different realities of what happened: either they are patriots protecting our election from fraud, or they're insurrectionists. People have their own realities around that, and it's worsened, one, by social media companies and how they create filter bubbles, but also by us, because people tend to know people who share their political views. Outside of what the news media and your social media tell you, most people don't have a lot of input from people who are different from themselves. San Francisco: I think there are like three Republicans left in the city, not entirely true, but we are self-selecting into groups that have different realities. I talked about our legislators being stuck. Part of the problem is that any legislator is more likely to lose in a primary to someone more extreme than their own views than to lose to someone from the other party. Here in San Francisco, a lot of my friends complained that Nancy Pelosi was too much a protector of institutions, too right-wing, and there were legitimate farther-left challengers to her.
All of my friends in the rest of the country think Nancy Pelosi is a left-wing nut job. So no Republican is ever going to, well, not ever, but it's going to be a long time before San Francisco has a Republican representative, and if someone were to lose, they would lose to someone farther out on the other side. So we have a self-reinforcing political environment that AI contributes to. And solving those different realities, those different views, is not a technology problem: AI can serve up all the diverse news feeds I want. I get Fox News in my news feed; I don't read it a whole lot.

So there's good news here, maybe: these technology companies are geographically aligned with places in the United States that reflect our San Francisco values. They cared that their photos of CEOs didn't represent the aspirational reality of what we would like that cohort to look like. They didn't like the fact that "three black teenagers" showed grouchy-looking kids while the white teenagers looked happy, so they went and fixed it. If the companies had been based elsewhere, they might have had different priorities. I gave another talk a couple of nights ago where someone brought up Twitter and the free-speech absolutism taking it over and asked my opinion. I don't like it, but what I like less is that we are allowing corporate executives to make that decision. That's a societal decision. Back to privacy: we need to provide guidance. Yes, I would like Facebook and Google and everybody else to take down all the content I consider hate speech, while folks in the rest of the country want their version of free speech protected. But I'm super uncomfortable with tech billionaires being the enforcers of truth. I just don't think it's their business, just because they happen to agree with me because of where they're based. That could change, and it is changing. These are the biggest western companies; I guess I put ByteDance on there. Wow, I did not put ByteDance in the right place. Exactly. This is a new slide, obviously because of ByteDance; I think I grabbed the logo and forgot to move it to the right spot. There are companies based in areas with very different worldviews. I'm not going to judge who's right or wrong, but I don't want ByteDance and Tencent providing my social guidance, and I don't think Apple should be providing theirs either. It's not the tech executives' prerogative. I have a much longer rant on that that I will spare you.

This is my last slide, so start thinking of questions. It comes down to the polarization piece. Among the challenges around AI, there's a lot of talk about killer robots and bias and eating jobs, which are all legitimate challenges, but I think there are known solutions; you just have to have the will to do them. The bigger challenge is that we now have commodity technology that can do really bad things and can empower bad actors at a scale that was unbelievable before. Manipulation: there's lots of talk about deepfakes, about making Biden say things that are clearly offensive as a way to undermine his political position. They're kind of clunky. If you look at them, they're sloppily done; they're not super convincing. Some are.
I don't know if people have seen the ad the Republican Party ran; it's clearly generated. They made fake news stories about Biden's second term: the National Guard has come into San Francisco to stop the fentanyl crisis, the border's being overrun. They have all this imagery of awful things happening. Now imagine it wasn't labeled, and it wasn't that clunky. Right now you see a video of the Pope in a funny jacket. But imagine a multimedia campaign that has audio tapes, that has video, that has news stories from a presumably reputable source. You have movement data of cars, and video of those cars at those locations. These are things state actors could do: put together many, many pieces of very compelling evidence for things that never happened. And that can now be done on my PC at home.

Another example is drones. You can get palm-sized drones, get a thousand of them, and put them together in a swarm. We currently use drone swarms for entertainment, the light shows around buildings where they make nice patterns, and in the war in Ukraine we're seeing a lot of drones being used to drop munitions. Well, let's put anthrax on some drones and fly them into the Super Bowl. That's doable for a few thousand dollars, probably, I don't know. Those are scenarios where you have AI doing harm, not because AI is bad, but because you have bad actors who will look to weaponize it. And the cause is people who are alienated, disenfranchised, who see society going off in a different direction, and who are willing to do very bad things. In the United States, the weapon of choice for disgruntled people seems to be assault rifles, but you can imagine it being some weaponized AI. So, with that negative downer, I will end, but the solution is to fix the underlying problems that make people angry enough to do these things. And that's hard work.

Let's check the room; anyone have a question? Maybe we'll alternate: in the room, Zoom, in the room, Zoom. Okay, in the back there.

Yeah, so the question is: artificial general intelligence as an existential threat to humans. My examples were point-in-time: search engines, images. We now have generative AI, and I talked a little about generative AI and media manipulation. I am not at all worried about AI causing an apocalypse. Killer robots? We can't even get cars to drive on the street. We're not going to have killer robots. As I said at the end, I am somewhat worried about killers becoming more efficient by using AI, but autonomous rogue AI is the least of my worries. We all probably know the Turing test, where intelligence is judged by conversation. My favorite current test for when we have artificial general intelligence is Steve Wozniak's: when a robot can walk into any home in America and make a decent cup of coffee. Just imagine all the steps: it needs to get to my house, get past the security in my building, open the door, find my coffee, know how I like my coffee. I drink decaf, not caffeinated. That's a long time off.

Zoom question: is ChatGPT good, bad, or ugly? I think it's good.
We have some things to work out around it. Some examples: people talk about ChatGPT hallucinating and giving wrong results. That's largely an expectation-setting issue. ChatGPT is not optimized to give accurate results; it's optimized to string together coherent sentences, and it does a very good job of that. No one is confused when they read a historical novel into thinking, oh my god, these events actually happened, these people actually said these things. You read a historical novel to understand the setting, get a feel for the time, and learn something. People can use ChatGPT the same way: to learn something, and then do the research to actually find out what's true.

There's a lot of talk about the job implications, particularly around creativity. I think it's going to change what it means to be creative. Simply stringing together words, which is what a lot of writers do, they just write well, is probably not a creative task anymore. Writing a compelling story, being a journalist who asks a good question, understanding how to get to the heart of a matter: those are very creative skills that generative AI is not going to do. It is going to write words better than most of us. And I think the other positive is that it's going to open up a lot of jobs and hopefully bring some income equity. Today, if you don't have good communication skills in whatever the native language is, there are gobs of jobs you can't do. Generative AI is going to allow people to do those jobs. As a colleague of mine says, ChatGPT isn't going to take your job; someone who knows how to use ChatGPT better than you do is going to take your job. So the idea is: learn to use it well. Go out to Stable Diffusion or DALL-E and put in a prompt. I have; you get some art back, and my stuff sucks, while some of the examples out there are really good. So clearly, being able to harness generative AI is a skill, and that will be the differentiator of creativity.

In the room, who has the next question? Yeah, thank you: elaborate on cybersecurity risks as they relate to AI. I often talk about that, but I really wanted to scope this talk down to AI only. They are somewhat different attack vectors. One is, what decisions are computers making for us? And the other is, what are people doing when they're able to get into a system: corrupting it, feeding it false data, or, like Oakland, encrypting its data. There is a bit of an overlap, in that you can see AI used in two ways. One is to craft better exploits, to attack software better: it will be able to write exploit code more efficiently and test for vulnerabilities quicker and at scale, so you may have more intrusions. The reverse is also true: we could use AI to strengthen the security of our software and make intrusions less likely. So it's not clear to me how that pans out. There is also a fraud scenario closely related to this: it's now very easy to duplicate someone's voice from a few seconds, maybe a minute, of audio, and you already have scenarios where people are being duped into paying ransom money for relatives who were supposedly abducted.
So now you have my voice calling my grandmother, along with fake imagery of me being abducted. You have a local news story about a Western tourist being abducted, and you have a very compelling, time-urgent "send the money now" scenario. That's not entirely cybersecurity, but fraud is related to it.

Next question: what are the ethical concerns around responsible AI development and deployment? For me it comes back to what I was talking about with the privacy example: setting expectations, at an international level, which we probably won't get, or at a national level, for what the boundaries are. There are probably three buckets of use cases: things we want to disallow AI for; things AI can be used for but that need to be very carefully monitored; and a free-for-all of things we don't need to regulate. I'll give examples. Things we won't use it for: most jurisdictions here have stopped using facial recognition for law enforcement. Doing a bit of it at border checkpoints probably makes sense, but for everyday enforcement there have just been too many bad scenarios. An area where you might want to be careful is benefits distribution: if we have a government program, analyzing for potential fraud in benefits. There are certainly downsides; we want to be really careful that we aren't flagging a particular subgroup, which we currently do in our benefits algorithms, I think women are much more likely to get questioned. So we want to watch that carefully, but there's probably enough money to be saved to make it worthwhile to deploy AI, carefully. And then there are things like traffic routing: I don't think the government needs to get involved in how I get from point A to point B. Maybe it does; these are just examples. So one piece is clearly defining the buckets: unacceptable, acceptable with limitations, and acceptable.

The other thing we need, at both a regulatory level and a technical level, for generated content is the ability to track the provenance of how something was created. Forced labeling of generated content is probably not possible, because bad actors just won't do it, and it's difficult to enforce. But what you can do is have technically enforceable proof of unmodified content. I am the author; I cryptographically sign a document I wrote; therefore we know a human wrote it. An image comes off my camera, snap, and provably has not been manipulated before it shows up in a news article; a sketch of the signing idea follows below. You're seeing human rights organizations do this, where they essentially allow no human interaction between a photograph of a war crime and the actual investigation: the metadata is not stripped off, the image is not manipulated. This matters a lot in Ukraine right now, where photos taken in one place get attributed to another place, and that makes investigation very difficult.
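A minimal sketch of that provenance idea using an Ed25519 signature via the Python `cryptography` package; the key handling and the "camera" here are stand-ins for illustration, not any particular product's scheme.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The capture device (or author) holds the private key and signs the
# exact bytes it produced; anyone with the public key can later verify.
camera_key = Ed25519PrivateKey.generate()
photo = b"raw image bytes straight off the sensor"
signature = camera_key.sign(photo)

public_key = camera_key.public_key()
public_key.verify(signature, photo)  # passes silently: untouched

try:
    public_key.verify(signature, photo + b" one edited pixel")
except InvalidSignature:
    print("content was modified after signing")  # manipulation detected
```

The signature proves the bytes are exactly what was signed at capture time; it says nothing about what happened before the shutter clicked, which is why the chain-of-custody practices the talk describes still matter.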
Okay, the question is roughly about the rapid progress we're seeing in AI, and the risk of it becoming superintelligent, smarter than humans, and then unstoppable.

So Nick Bostrom has a book called Superintelligence where he plays out a bunch of those scenarios. I think they're a bit fear-mongering, personally, in my own view. But the thing he brings to the table that I think is very important is that we need to start thinking about this now, because the transition, if we finally get there, might be fairly quick. You can't make your battle plans in the foxhole; you can't start disaster recovery planning in the middle of the disaster. So we need to start thinking about it now, and he brings that very much to light.

But as the first gentleman asked about: the scenarios we've seen around AI are very specific. Search results, traffic routing, generating text. There is no general artificial intelligence; it's all very niche. And there is no known roadmap to get there; no one knows how to do it. AI can't do things that toddlers can do. It may happen someday; I don't think it will happen in my lifetime. So I'm not worried about it exponentially getting smarter and smarter so rapidly that it outpaces human intelligence. In some narrow scenarios it already does: pattern matching, where AI can find fraud way better than people, and voice transcription, where AI can now understand words better than a human transcriber in some scenarios. So in niches it is better. But there is no general "I'm going to make paper clips better than humans," to use Nick Bostrom's example. There are two big blockers to the scenario you bring up. One is that we need artificial general intelligence, and scientifically I don't believe there's a roadmap. And then you would have to imagine much better interaction between AI and the real world: an AI that can defend itself and not be unplugged. Sorry, Dave, I can't open the pod bay doors. I think those are difficult. As I mentioned earlier, maybe before you got here, we can't even get cars to drive on the street reliably. Streets are a two-dimensional grid with traffic laws, great signage, and expected behavior, an extremely constrained environment, and we can't even get cars to handle that after how many bazillions of dollars have been invested. Now try that across a wide variety of scenarios: carrying a tripod upstairs to set up and do videotaping, just because that's the example in front of me. So yeah, I think it's a long time off. And maybe I'm naive.

We're five after, so I could keep going a bit, but we're at time. If you're on Zoom, thanks for dropping in; I hope it was enjoyable. Feel free to drop off; I wouldn't know if you did anyway. And if you're in the room and you have somewhere to go, or you're bored, feel free to head out. Thank you, and thank you, Zoom.