Thank you. Thank you, everybody. Great to be here. Tonight we are going to be talking about product management ethics, and particularly ethics as we enter the age of AI. We'll talk about why that's specific and why that's interesting in just a minute. Of course, my name is Drew Dillon. This is one of, I believe, four embarrassing photos of me throughout the presentation. All of these are blackmail-worthy if you ever need them. I'm currently an executive in residence at Costanoa Ventures. You can find me elsewhere on the internet: on Twitter as Drew Dillon, and on Medium, where I write as A Product Man. It's a mix of lessons about product management, all the way from beginner material about how to get a job to really, really esoteric stuff that I basically write for myself.

My most recent job, if you checked out my bio, was VP of product, data, design, IT, business development, a little bit of everything, at a little company called AnyPerk. It recently rebranded to Fond right about the time that I was leaving. AnyPerk, or Fond, is basically an employee engagement company. They seek to humanize the relationship between employee and employer. Prior to that, and probably better known, I was a director of product at Yammer. Who still remembers Yammer? We've got two Yammer OGs back here, and John, who I talk to all the time. Yeah, so Yammer was one of the first of the recent crop of unicorn startups. You might think of it as being a dino-corn. Sorry to use that image. And then most recently, I'm now working at Costanoa Ventures. It's a boutique seed and Series A stage venture capital firm that really, really focuses on operational involvement and engagement with its companies. So it's kind of fun for me to use my product skills with a bunch of different portfolio companies. And if anybody has any questions about venture, I'm happy to take those after the meeting.

So today we are going to be talking about ethics, and not just ethics, really ethics in artificial intelligence. I have no formal training in ethics. I didn't learn anything about ethics in college, or much beyond what I got out of Boy Scouts and what my parents taught me. I was a computer science major, and the closest thing they taught us (because they'd surveyed students who'd gone through the program five years before and realized they were bad at a bunch of really important professional things) was public speaking, for which I got a B plus. So I hope everybody's ready to enjoy my B-plus oration on ethics and artificial intelligence.

Also, one thing before we get right into it: there's no way to talk about the downside of misuse of data, or as we get all the way to artificial intelligence, without talking about some pretty nasty societal issues that can be caused by bad data. Racism, sexism, creedism, classism, childhood obesity. You kind of have to talk about all of these subjects. I understand that can be extremely upsetting, so totally feel free to head out if that's upsetting to you, and I can provide notes about the key takeaways later that don't dwell on those topics. So with that said, let's move on.

So I wish I could say that I'd always been some kind of ethical crusader, that I'd spent my career righting wrongs and setting things right. That isn't really the case. My interest in ethics actually only began at the beginning of this year, and it began in kind of a weird way. I'm a pretty obsessive self-improvement person.
Last year self-improvement was very much about learning, and that's part of where I got into the AI thing. This year it's really about health; you might see my yoga mat up here. And the way I prepare for these things is by reading obsessively about them, by really preparing myself for a decision I'm going to make, or for some new habit I'm trying to create, by really, really focusing on it. So a friend recommended this book, Salt Sugar Fat, and I read it back in January, and it's pretty mind-bending. It's about the processed food industry: basically the engineering that goes into processed food, why our brains respond to it, why we crave it, all those kinds of things, and then of course why it's not good for us. And if you're thinking about that book, you have to think of the overwhelming evidence you've heard about childhood obesity rates. Here are just a couple of graphs on it, but I'm sure it's no surprise to you that childhood obesity in the US is exploding. Processed food is a huge part of that, and this book really tells you the story of how processed food got to be that way, how it's positioned in stores so it appeals to kids, all of these things.

I was reading this book and feeling duly horrified by the negative societal outcomes of all the stuff that was being created, but all along I felt this undercurrent of bizarre inspiration, because these people were really working on difficult challenges, and every time they hit them, they managed to innovate their way above and beyond what they thought was possible. One example of this is the bliss point. This is Dr. Julie Mennella. She is not part of the evil cabal, necessarily. She's a food researcher at the Monell Center who got brought in. Her research really focused on something that she called the bliss point, which you can basically measure through brain scans and people's reactions to various foods. So if you look at salt, sugar, and fat, you can see that too much of any of them is a problem. With sweetness, it's more that the brain shuts down and doesn't process the additional sweetness, so it's actually a cost savings to use less sugar. Saltiness, obviously, can get too salty. But what she was able to find were these bliss points within different demographics, within children, within adults, and then match them up as they work together (salt complements sugar and fat), so that she could find the intersection of all these different bliss points across demographics, age groups, and regions. And that's led to some pretty remarkable things.

Let's look at some of the wonderful food that has come out of this research. Capri Sun is engineered to be as sugary as possible, basically using natural flavoring, but not so sweet that your kids won't respond to it, won't like it. Cheez-Its: they go into this a lot in the book, but if you actually make Cheez-Its without salt and fat, they taste like cardboard and they're really chewy and gross. Doritos: I don't think that thing's ever seen a vegetable or been around somebody who's seen a vegetable; they're just not healthy food. Lunchables: the book also goes into really great detail about how they engineered basically a massive dose of calories and sugar into these little meals that kids were taking off to school. It's telling that the creator of Lunchables refuses to have them in his house, which is true of a lot of these folks.
The executives at processed food companies do not eat processed food. And then there's the great-grand-pappy, Coca-Cola. Coca-Cola is actually engineered to be different levels of sweetness in different regions. Coca-Cola makes the syrup, but they send different syrups and different mixes to different places. So in the Southeast, it's extra sweet. Somebody from California going to the Southeast and tasting Coca-Cola is going to taste something different than what they get out here; it's going to be a lot sweeter than they're used to. That's because Coke was able to locate a higher bliss point for that region. And if you think about it, this is serious scientific innovation. The people doing this are using this research and creating billions of dollars. Here's just a comparison of Kellogg and General Mills basically since 1987. Most of this research was happening around here, so imagine going all the way from back then to this point.

And ultimately, as I thought about what was going on here, I was like, okay, I get this, I get that it's food and everything. But it didn't really dawn on me until, and I swear it doesn't happen until 75% of the way through the book, the first time the author refers to one of these people as a product manager. And it hit me like a ton of bricks. It's like, oh, that's why I'm inspired. They're doing my job. They're driving the kind of real scientific, technological innovation that I've worked toward my entire career. They're creating billions of dollars in value, and this person could easily be me. And that's where I figured it out. That's where I realized that ethics are important, and that these things are things that people like us need to figure out. Because ultimately there needs to be some sort of value above and beyond creating shareholder value. There needs to be something more important for anything that we do. Creating shareholder value, delivering good product, is just a single component of that.

But of course, we're all digital product managers, right? We're not actually making kids obese. We're not doing these kinds of things. We're not dropping bombs. We're not creating computers that count up death tolls. None of these things are really what we're focused on. We build digital products. It can't be that bad, right? Who's familiar with Nir Eyal and the hook model? I imagine anybody who's one of the Product School students is. So Nir Eyal is a behavioral economist, and he built what he calls the hook model, and it's pretty remarkable how often you'll see it in real life. It starts with some kind of external trigger: something outside of the product gets the person interested in using it. They take that external trigger, and it drives them into the product to take some kind of action. To make this model work, you need to figure out what action is then going to create value back for the user. The trick of the hook model is that the action doesn't necessarily lead to a positive outcome; different times it'll lead to varying levels of reward. The loop will sometimes short-circuit and skip different parts, and you might go back and forth here a few times. But what's important is that you don't always have a positive experience. Because when you don't have a positive experience, it leads you back in search of that positive experience again. You'll want to keep going back, to keep trying to figure out what the lottery pull was that got you that reward.
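To make that loop concrete, here is a minimal sketch in Python of one pass through the hook: external trigger, action, variable reward, and the investment step that closes the loop. The probabilities, field names, and threshold are all made up for illustration; the point is that the reward step is literally a random draw.

```python
import random

def hook_cycle(user, reward_probability=0.3):
    """One pass through the hook: trigger -> action -> variable reward -> investment.

    `user` is a hypothetical dict of habit state; all numbers here are invented.
    """
    # External trigger (push notification, email) until enough investment has
    # built up that the trigger becomes internal (boredom, habit, craving).
    trigger = "internal" if user["investment"] > 5 else "push notification"

    # Action: the simplest behavior done in anticipation of a reward (open the app).
    user["opens"] += 1

    # Variable reward: sometimes great, usually not. The unpredictability itself
    # is what sends the user back around the loop, like a slot machine.
    rewarded = random.random() < reward_probability

    # Investment: the user puts something in (a post, a follow, time), which
    # loads the next trigger and makes leaving more costly.
    if rewarded:
        user["investment"] += 1
    return trigger, rewarded

user = {"opens": 0, "investment": 0}
for _ in range(50):      # fifty trips around the hook
    hook_cycle(user)
print(user)              # opens always climbs; investment climbs only on rewarded passes
```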
And I've noticed this in the emails I get from Quora and Pinterest; they all have this component, this element of it: oh, that email was worthless, but I'll check the next one, there might be a good one there. And so it's whatever it takes to eventually drive you to a deeper level of investment. Now you're putting a lot of time and effort into using that product, because only through that will you actually build up an internal trigger that causes you to go back into the model and loop around and go through it again. Let me slide this back a little; messing up my mojo. What Eyal is very clear about is that he modeled this off of not only product addiction but actual addiction, real drug addiction. He's very upfront about that, and he talks about his own ethical issues with the hook model and with companies that push it too far. But if we think about it, technology works through an addiction cycle, and drugs like heroin work through the addiction cycle too: heroin uses the exact same pathways to get you addicted that your brain uses for food. There's not a huge amount of difference between the bliss point for sugar and the bliss point for hard drugs, or for products. So it's very much related. The things that we are doing use the exact same pathways as hunger and substance abuse.

Just look at Facebook's insane numbers. They've grown to, and this is now two or three years old, 1.5 billion users, with a billion of those users daily, a 65% DAU over MAU. This is a stickiness metric. It used to be at 50%. Now it's up to 67. That's insane. No product in the world has that kind of loyalty. So looking at Facebook's numbers and understanding addiction starts to feel a little strange. Do people actually even like this product, or is it something they feel like they have to use? That ends up being the question. Certainly this guy is super curious about how to get people to use Facebook more. And he's figuring it out. He's recognized some signs of addiction and is capitalizing on it.

So now we're going to get into the meat of it. We're going to really dig into AI ethics and why, in particular, the ethical situation gets much more difficult and much more critical when we talk about artificial intelligence. One way to think about a knowledge worker's job is through what's known as the OODA loop. The OODA loop was invented by the military, particularly the Air Force, and there are a bunch of different versions of it. It was popularized through W. Edwards Deming and his work with Toyota manufacturing. Basically it goes through four very simple steps of process improvement. First, observe: just take a look around and make an observation about the environment. In the case of Toyota, it's: what's not efficient about our manufacturing process? Next, orient yourself: try to understand the data that you have and the biases that you're bringing to your decision. Then make a decision; your decision really is a hypothesis of what you think will happen next. And then finally, take action: test your hypothesis, and then loop back around and observe, orient, decide, and act again. So one way to think about artificial intelligence, or the increasing use of data, is through OODA loops: going from the people-powered OODA loop, which every product manager runs through in pretty much every part of their job, to big-data-driven OODA, where all the observation is being done by the data that you've been collecting.
Then machine learning, which takes over part of the decide step, and then all the way to AI. AI's ultimate goal is to optimize itself. So we need to talk about and think about what happens when you take humans out of OODA loops: when there's no one re-orienting, checking biases, figuring out if you have the right data set, checking the outcomes of the action before you go into the next round. Those are critical pieces, and we'll talk about exactly why. But it's a brave new world. We're entering something that we haven't really done before. We haven't created intelligent agents before, largely because we haven't had the technology, or even the interfaces, with which to do that.

So let's talk about interfaces. The first kind of challenge that comes up with ethics and artificial intelligence is interface isolation. You can think about the web as moving from a very static environment to ever more dynamic interfaces. In the static environment, way back when, the very first websites were literally text files that sat on someone's computer, and a URL was just a way of getting to that text file from your computer, and that was it. That file didn't do anything. If I wanted to update it, I went and edited it and hit save, and boom, it was live, and everybody else could refresh and get the new version. Through JavaScript, sites have developed insane capabilities in interactivity. They notify us, they send us flash messages, they have all kinds of things flying around the screen. They tailor themselves to our experience, to what we need, to what we did the last time we were on that site. So we've moved from a place where the web was incredibly static to the web being ever more interactive in Web 2.0, and AI is just going to pour gas all over that fire. Whatever we thought was difficult about dynamic websites today is going to be significantly crazier five to ten years from now, when the machines are actually doing the tailoring. The site might look completely different for two different people. One person might get something that looks like Facebook, and one person might get something that looks like Microsoft Outlook from 2003, because honestly that's what my parents would prefer to be using.

And what happens as we get those more dynamic interfaces? You've seen this, I'm sure. On this side is Facebook; here's Twitter. You can see Facebook friendships by political leaning: liberals don't really have that many conservative friends; similarly, conservatives don't have that many liberal friends; moderates can't even find other moderates to hang out with. On the Twitter side, this is basically a graph of interactivity, how often Twitter users of different political leanings interact with each other, which is basically never. There are some brave folks here in the middle, and the little blue stars out here, I assume, are all celebrities being trolled by somebody. But our products are actually pushing us further apart, and it's really easy to just say these users selected themselves that way, because that is going on in society. You might have heard of the Big Sort. This is a sociological study basically about how mixed different counties in the U.S. used to be versus how monochrome they are now: counties that voted Republican or Democratic by 20 percentage points or more. If you look back at 1992, there actually weren't that many, and even the really populous areas
just didn't have those big blowouts. But looking at 2012, there are massive blowouts all over the place. The two sides don't even live together anymore. Some piece of this is the return of educated millennial types back into the cities. Some of it is just more mobility, not social mobility, but mobility away from where you came from, and then weaker community ties back to the original place you came from. But ultimately, everybody here lives in San Francisco, and to some extent, unless you were born here (and there's probably only one or two of you), you selected to be here. We're as much a part of this as anything else. I'm from Virginia, you know, that little red piece right there, and I'm not there anymore, so I've contributed to this. So this is honest and true, and when Facebook and Twitter point this out, it's totally reasonable. But at the same time, they're also enabling it and pushing it further. So you can say that it's a societal thing, but the technology that we use can also have compounding effects on it. And you can see the separation elsewhere: this is the same kind of chart, but now it's Congress, with the two sides pulling away from one another. You can look at the incursion of Russian actors into ad buying and into the last election. It really lands as a head-scratcher: you don't know exactly, it feels like something is off, but it's hard to say exactly what's off.

So let's talk about Tay, from my former employer, Microsoft. Tay was a brave little Twitter bot that went out there into the world. Tay's data inputs were from Reddit and Twitter, which are essentially unmoderated communities. There's a dash of moderation on Reddit, but ultimately these were unmoderated communities from which to build the training set of data she would use to figure out how to talk to people. So they launched Tay, and of course she came out super racist right off the bat. And then people found that by interacting with her, they could actually make her more racist, and it got worse and worse. So Microsoft took her down, retooled, thought a little more deeply about what they were doing, relaunched Tay, and then she was a pothead. So through this simple conversational interface, they managed to offend a lot of people and have her claim she was breaking all kinds of laws, all of it sponsored by a giant 95,000-person company whose stock is crushing it right now.

And then another really simple example: there's a soap dispenser that couldn't recognize a black hand. The video continues, and they show putting a piece of paper under there, literally a white piece of paper, and it works; a white person's hand, and it works. Just let it sink in for a minute that this soap dispenser is basically making a judgment call about this person: either that this isn't a hand, or that they aren't a person. And that's the negative implication of interface isolation. We're pushing somebody into a different place than everybody else, benefiting the majority and continually hurting the minority. So the first principle that I want to hammer away at is: don't let interfaces isolate people. Interfaces are increasingly dynamic. Interface decisions are increasingly made by machines, replacing what ordinarily would have been a product manager's intuition or a designer's intuition. So we must design systems with a North Star beyond personal preference and beyond metric optimization. Simple MAU and DAU numbers don't cut it when we're talking about real societal impacts.
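To make "a North Star beyond metric optimization" a little more concrete, here is a minimal sketch, with entirely hypothetical data and field names, of pairing engagement with an isolation guardrail: the share of each user's feed that comes from sources leaning the same way the user does.

```python
from collections import Counter

# Hypothetical feed log: (user_id, user_leaning, source_leaning of the item shown)
feed_events = [
    ("u1", "liberal", "liberal"), ("u1", "liberal", "liberal"),
    ("u1", "liberal", "conservative"),
    ("u2", "conservative", "conservative"), ("u2", "conservative", "conservative"),
]

def isolation_score(events):
    """Fraction of items shown to each user that match the user's own leaning.

    1.0 means a perfectly sealed bubble; lower means more mixed exposure.
    """
    same, total = Counter(), Counter()
    for user, user_leaning, source_leaning in events:
        total[user] += 1
        same[user] += int(user_leaning == source_leaning)
    return {user: same[user] / total[user] for user in total}

print(isolation_score(feed_events))  # roughly 0.67 for u1; 1.0 for u2, a sealed bubble

# A launch check could then gate on both numbers, not engagement alone:
# ship the ranking change only if engagement improves AND average isolation
# does not get worse.
```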
Next: biased data, another big challenge. Part of what you saw in that soap dispenser issue was that the data they were using was ultimately a bunch of white people testing it. My buddy Otis, one of my favorite Otis quotes: all data has an opinion, and you can't trust the opinions of data you didn't collect. What he means: think of something really, really simple. I've got a table and it literally just has people's names in it. So I create a column for first name and a column for last name. Then I launch in Japan. Japanese family names typically come first in order, but everybody still gets called by their personal name. So do you put the family name in the first-name column, because it comes first, or in the last-name column, because it's the surname? What do you do? Somebody's going to make that decision. After that decision is made, this data is now biased toward whatever that decision was. Did they record how that decision was made? Did they store it someplace where other people, when they use this data, are going to be able to find it? This is a very simple example, but you're already screwed up on first name and last name. So by the time you're getting to deep judgments about people and what they're doing, criminal behavior, fraud, all kinds of other nasty things, if you're pulling in a bunch of other people's data, there are so many opinions in there that you're getting, whether you want them or not, and they aren't necessarily the ones that you believe in or the ones that you would agree with.

And the reason for that is that we are subject to a hilarious number of cognitive biases. This is actually from the Wikipedia page that I went to check before creating the presentation, because they used to have a running tally of the number of discovered cognitive biases. They don't have the number anymore. It's gotten so deep that they just categorize them into big buckets of ways that we're generally wrong. The last count I saw was something like 1,500: 1,500 documented cognitive biases, reasons that we will make stupid decisions, or decisions that defy facts and don't necessarily lead to positive outcomes. So you take all of that human bias, you shove it into terabytes of data, and then you upload it to a neural network that doesn't tell you what's happening, and you just assume everything's going to be cool, right?

Cathy O'Neil: oddly enough, I found this book, Weapons of Math Destruction, after volunteering to do this presentation, so it gave me a lot of good material, thankfully, but it's a really good book regardless. She talks about these Minority Report-style pre-crime systems. Everybody's seen Minority Report? It's an older movie. In Minority Report, they developed a system, based on psychic kids they have all hooked up, that can predict crimes before they happen. In real life, the pre-crime systems aren't that intelligent. What they actually do is: you load up all of your crime data for all the counties that you work with, and they go and find patterns, how often crimes occur in a certain area, what time they occur, so that police can have more feet on the street at the times and places crimes usually occur. I haven't actually seen the outcomes of one of these systems, but I have to assume they do drive up arrests, and maybe they even actually stop crime. But can anybody think of a problem with that system? That's a big part of it. Can anybody think of specific reasons why it might end up being racist? Exactly.
So this system puts cops where crimes often happen. Crimes often happen where there are very desperate people. Where there are desperate people, there is low income. These systems are expensive, so they're only bought by large cities, where the low-income neighborhoods tend to be minority-dominated. And then you just have cops walking around all the time in that neighborhood. The other thing that leads to a high incidence of arrest is being around cops a lot. This was a controversial study, and I won't get too deep into the specifics of it, but what it ultimately said was that Black and Latino folks are pulled over 14 times more often than white people. And that's regardless of whether they did anything wrong: the incidence of them actually possessing something illegal or actually having done something illegal is significantly lower than in the white population. They're just getting stopped for no reason. Which means they're more likely to encounter some form of violence. This is a ridiculous chart, because this is data provided by the police departments themselves. They self-reported this information about stops: whether the person being stopped was white or Black, whether they were compliant or non-compliant (that is, agreeing and being helpful), and then whether they ended up getting assaulted in some form anyway. And what it basically says is that if you are Black, you are 21% more likely to get touched by the cops just for listening, just for doing exactly what they asked. That's insane. And who wouldn't respond negatively if something like that happened? And then: pushed into a wall, handcuffed, weapon drawn, pushed to the ground, gun pointed. And I'll remind you that these are people with significantly lower rates of actually having committed a crime.

So putting cops around people, around kids, on a regular basis is just going to drive up arrests, no matter what color they are or what their socioeconomic status is. And the thing I should also point out is what happens if we have cops spending a lot of time around teenagers. Neuroscientists have basically said that teenagers are clinically insane. The reason for this is that the frontal lobe doesn't mature until you're 24 years old. The frontal lobe, basically your System 2 thinking (judgment, impulse control, emotions, reasoning, problem-solving), is not fully baked until you're 24. So you put a bunch of cops around a bunch of teenagers, and they're going to do a bunch of stupid stuff. Look at this idiot. I mean, I want to arrest him right now. And I have to think about the ways my life would be different if somebody had been around when I was doing borderline criminal behavior as an idiot teenager. My life would be very different if I'd encountered cops during those periods.

Well, let's talk about HR. I just came from an HR tech startup. AI is a big hot thing in HR tech, for reasons. AI is increasingly being used to look at employment data and see what will make you most successful within your career. One famous version of this happened at Fox News. They hired a bunch of consultants: come tell us what makes a successful Fox News employee. The consultants looked at all of the data and said, well, it seems that people who come in a little bit older, tend to be white, and tend to be male are really successful at Fox News. So, sweet: let's just throw out 50 years of equal employment progress and just go with what the facts tell us.
Of course, I checked that afternoon, and there are nine sexual harassment cases listed just on the Wikipedia page for Fox News, dating back to 2000. So maybe there's another reason women weren't successful there. Maybe it had something to do with the fact that people wouldn't stop grabbing them or saying inappropriate stuff to them all the time. Maybe that makes you not successful and makes you want to leave the company earlier. Or maybe it means that people stood in the way of your career. And so the data being biased toward what had already happened, without addressing the societal elements underneath it, means you could have created a self-perpetuating system.

Another interesting example. Right now, today, there are satellites flying over various terrorist hotbeds. I'm sure they're over Syria, I'm sure they're over Iraq, I'm sure they're over Afghanistan. And this is an interesting technology; the guy behind it was talking on NPR, and it sounded fascinating. Basically, if there's a terrorist act, and you can find anybody leaving the scene of that terrorist act, the AI can actually track them. It can track that person, follow them back to their house, follow them wherever else they go, and start to catalog that information. It then figures out everybody else they're talking to, so it starts tracking those people. And then it starts tracking the people that they talk to. And then it starts tracking the people that they talk to. As you can imagine, this is super popular with the military in military situations, and particularly in counterterrorism, where they're really fighting a network. This goes back to what I worked on right out of college, which is group dynamics: why people talk to each other, when they talk to each other, and how. And those things are very different. What you need to make these systems work, however, is massive amounts of data. The metaphor I learned for this right out of college is sonar. Think of a sonar system sending little blips out into the ocean. It's just getting noise back, just random whatever-is-happening-in-the-ocean noise, and it needs to find the one submarine under the water that is the actual real threat. It doesn't care about the whales; it doesn't care about what's going on elsewhere or up on the surface. It's the same thing with those PRISM-style systems. When they tell you they're collecting all the metadata from everybody's phone calls, it's because they have to. If they want to figure out how this person is connected to that person and what that terrorist group looks like, they have to collect all of the information.

Now, the company working on this is very clear to point out that many municipalities have already outlawed it. It is not currently happening in most major cities that I'm aware of. They are using it on the border, and they are talking to international law enforcement about using it: a system that can gather information about who you talk to and when you talk to them. It can't arrest you yet, but it can trigger FISA warrants. It could put you on a no-fly list. It could do a lot of stuff without you ever actually being told that you're a person of interest or that you might be arrested.

Switching gears for a minute: I also read a book called The Gene, by Siddhartha Mukherjee. This one was particularly interesting. He talks about an American scientist, Hermann Muller.
Muller was originally a very big proponent of what he called positive eugenics. Eugenics is a very loaded term now, but I'll give you the framing I think people had back then, which is basically: if you could find a genetic condition that is negative, you could, through genetics, remove that condition. That seems like a logical, good thing. What he found, though, and what The Gene talks about a lot, is that good and bad were actually very murky, and still are. So I'm just going to read this whole passage for you. I apologize for reading during a presentation, but it's pretty critical. Muller began to realize that positive eugenics was achievable only in a society that had already achieved radical equality. Eugenics could not be the prelude to equality. Instead, equality had to be the precondition for eugenics. Without equality, eugenics would inevitably falter on the false premise that social ills, such as vagrancy, pauperism, deviance, alcoholism, and feeble-mindedness were genetic ills, when in fact they merely reflected inequality. The book talks about examples where a woman was deemed feeble-minded and forcibly sterilized by the U.S. government. It turns out she was just a little bit slower at reading; she got a bad education and grew up in a bad household, and it led to her being sterilized by the government. That's the negative side of eugenics, and it has a pretty direct correlation here, in that equality had to be the precondition for eugenics. Similarly, I feel that if you want to avoid interface isolation, equality has to be the precondition for artificial intelligence, particularly in these municipal use cases. We cannot have the government just running off with data without any kind of oversight, or without any kind of understanding of what biases may exist in their own data.

Next, we're going to talk about black swans. The other big gap you'll find within all data is that all data is backwards-looking. So the best predictive system in the world is still working off data of only what has ever happened before. It can't tell you exactly what is going to happen in the future. When will the big one hit Northern California? We don't know. They've been predicting it since the last one, and they were probably predicting it for a hundred years before that one. What does your self-driving car do when the Bay Bridge collapses? I don't know. I don't know what a self-driving car does when the Bay Bridge collapses. I do know that the new Bay Bridge has the exact same problems as the old Bay Bridge, thanks to government contracting and single points of failure. So this is actually a realistic scenario for sometime in the next 20 years. What most likely happens is the autonomous vehicle probably stops. But what is also true is that it would be silly for me, as the creator of an autonomous vehicle, to build in a test that has the road collapsing. So the truth of the matter is, when that happens, if there's an autonomous vehicle there, it will not have prepared for that scenario. It has not dealt with this piece of data before, which means we don't know what will happen. We don't know if a human would have been better or not. And I don't think it's any surprise to you, given that this chart of the strength and intensity of hurricanes only goes to 2009 and we're now in 2017 and they're all basically eating the world, that black swans are becoming more frequent, and as they become more frequent, they're also gaining in intensity.
So the rare event is happening more often, and when it happens, it happens worse. So if you build a system that predicts a very specific set of things and then throw in a new random variable, there's just no way for an AI to really respond. And if you think about it, a 500-year flood is really a very silly concept for a civilization that's only been recording history for 5,000 years. That's one tenth of the time that we've literally been writing stuff down. We don't know what a 500-year flood is, or whether we're about to experience the 6,000-year flood, because we didn't write anything down back then.

What we're ultimately talking about is whether a system is deterministic or stochastic. This is a genuine AI lesson here. In a deterministic system, a player's choice specifically decides what happens: when I say I want to move this piece, I move that piece, done. A stochastic system is probabilistic: I'm going to take an action and this thing might happen. I think this is what's going to happen, but you could always land on the snake or the ladder and end up in a different place. Digital systems are mostly deterministic. Computers do mostly what you tell them; even when you told them the wrong thing, they do that thing. So digital systems are somewhat more trustable. Of course, if you get to the very end of Cisco support and they get down to the bottom of the decision tree, they'll tell you it's sunspots, so it goes back to being stochastic. But by and large, digital systems are deterministic. Natural systems are stochastic. There may yet be a form of math that can define and figure out everything that's ever happened in the world, and all free will, from the Big Bang to me saying "ah" a million times during this presentation. But we don't have that math. These systems have so many variables that even if we observe something happening, we can't 100% say that that's exactly why it happened. Ultimately, Neo was irreducible; they couldn't figure out the math that would make the Matrix work.

Nate Silver: both a god and a demon to various people. While I appreciate Nate Silver and the work he does in popularizing probability and statistics and driving data-driven decision-making back into the world of journalism (and journalism, I believe, has improved significantly with the addition of better data rigor), I hate everything about the actual election-tracking piece. Think about this: Hillary Clinton has a 71% chance of winning. That means nothing. You've run it through a Monte Carlo simulation; it's given you the results 5,000 different ways; it's chopped it all up. By the way, he's basing his model on other people's polling data and then adjusting it again, so now we have doubly opinionated data leading up to this calculation. Then we're basically saying we'd have to run this experiment at least 1,000 times; we'd have to have 1,000 elections, minimum, to get these kinds of statistics. And at the end of the day, Hillary wins in 71 out of 100. Cool. Done. Move on. That's what I call a vanity metric: it doesn't tell you anything, you don't learn anything. At best, polling data is trend-related. And the challenge there is that it's a small sample size; it's the law of small numbers. We've had 56 elections throughout the history of the US. Doing anything 56 times is not going to give you a good distribution, no matter how many polls you run ahead of time. If you flip a coin 56 times, it's not going to come up exactly 50-50.
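To put a rough number on that, here is a minimal simulation, assuming nothing but a fair coin, of how far the observed heads rate typically strays from 50% at 56 flips versus thousands of flips.

```python
import random

def heads_fraction(n_flips):
    """Fraction of heads in n_flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

def typical_miss(n_flips, trials=2000):
    """Average distance of the observed fraction from the true 50%."""
    return sum(abs(heads_fraction(n_flips) - 0.5) for _ in range(trials)) / trials

for n in (56, 1000, 10000):
    print(n, round(typical_miss(n), 3))

# Roughly: 56 flips miss 50-50 by about 5 percentage points on average;
# 10,000 flips miss by well under 1. Fifty-six observations of anything
# is just not enough to pin down the underlying odds.
```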
You have to flip a coin thousands and thousands of times before you're even going to get close. So the idea that you can accurately predict something from a very, very low number of incidences is crazy. And of course, what I'm ultimately getting back to is: we only have one society. We're just one Earth. We don't get to try these things out and A/B test them. We're not Rick and Morty; we can't move on to a different dimension when we screw this one up. By the way, that's a stochastic system too: there's no way of proving they're exactly the ones who ruined it; it's pretty clear, but even in the episode it could have gone different ways. So we don't get to A/B test reality, which means we can't really optimize a lot of really, really big, important things. I could go on about this pretty much forever. Nassim Nicholas Taleb writes really interesting, provocative books about these things: The Black Swan, and Fooled by Randomness, which is actually kind of the technical counterpart of The Black Swan, so it's the one that I read first. He is, I will say, very controversial, and I like controversy, I like to read controversy, but don't follow him on Twitter; he's a notorious Twitter troll.

So, since this is a school, I figured we'd have a little story time. This is a book on Zen Buddhism some friends bought my daughter. Let's see if I can flip open to the right story. There's a story in here that they call the farmer's luck, and it reminds me a lot of these sorts of lessons. You can see she's bent all the pages over. All right, can everybody see the illustrations? Cool. There once was an old farmer who had worked his crops for many years. One day his horse ran away. Upon hearing the news, his neighbors came to visit. "Such bad luck," they said sympathetically. "Maybe," the farmer replied. The next day the horse returned, bringing with it two other wild horses. "Such good luck," the neighbors exclaimed. "Maybe," said the farmer. The following day his son tried to ride one of the untamed horses, was thrown off, and broke his leg. Again the neighbors came to offer their sympathy on his misfortune. "Such bad luck," they said. "Maybe," answered the farmer. The day after that, military officials came to the village to draft young men into the army to fight in a war. Seeing the son's leg was broken, they passed him by. "Such good luck," said the neighbors. "Maybe," said the farmer. You can see he's watching TV instead of doing anything useful.

Which is to say that in a totally stochastic system, in a random system, we can't necessarily even tell what's good and bad. We can't even tell good luck from bad luck. A broken leg might be the best thing to ever happen to us. I know the first time I got laid off was probably the best thing that ever happened to me; I happened to be working in government. So the next principle I want you to take away is to make allowances for black swans. Attempted optimization around black swan events makes them worse, because you're optimizing for a condition that no longer exists, and as those conditions ramp up, things get worse and worse and worse. Another great example of this is watching high-frequency trading bots during the 2008 meltdown. They just didn't know what to do. They were flailing all over the place, so firms took them offline and brought their traders back in to do regular trading again. We barely understand probability, we often base all of our decisions on very small sample sizes, and often we're just optimizing the wrong things.
We just don't even know what's good and bad, and we keep optimizing a thing that might even be a negative. Cool. So next we're going to talk about the first of the ways to start really improving the system: feedback and experimentation. Let's go back to that pre-crime system. One of the key takeaways of the book was basically that the pre-crime systems go wrong because they don't actually have human intervention. The author is a data scientist, and she's not unaware of the limitations, uses, and value of data. What she points to is that Facebook and Twitter and Google all have lots of tools for interpreting data, using it effectively, and putting human decision-making back into the flow. The challenge with the pre-crime systems, of course, is that their buyers aren't hiring data scientists. They're not hiring PhDs. They probably have a couple of IT folks on staff and maybe a bunch of expensive contractors. The human is not in the loop of these pre-crime systems because they don't have those humans. The tools all exist; they just don't have them and aren't using them. And so what you get is basically an OODA loop that is continually optimizing into that inequality. These folks are in a poor neighborhood, which is also the neighborhood where a bunch of crime happens. We're now boosting a bunch of arrests in that neighborhood. Those arrested happen to be minorities, so now the data says that minorities in this neighborhood get arrested a lot, so now we're going to send more cops to that neighborhood. It's just a death spiral. The OODA loop actually drives you into more and more negative situations.

The AI answer for this is what they call unsupervised learning, where you basically have a generator, an adversary, that's trying to come up with random data to fool the core component of the AI. It tries to basically teach itself what randomness would look like, what a black swan event might look like. The challenge of GANs, and of any unsupervised learning, is that they're only as creative as we tell them to be. If we don't think of the variable, then the GAN isn't going to use the variable, and then the system isn't going to understand that variable, even if it comes up and is really important. So you need feedback and experimentation again. You can't just let machines experiment on their own. And I'll tell you: if I interview 100 product managers today, I will see maybe five of them with a rudimentary understanding of probability and statistics and the ability to A/B test. That is the state of the art today. And we are the people who need to be telling these systems what to do, providing the feedback, and coming up with the reasons for them to do certain things. This is not a good showing. We're driving really hard off a cliff, and 95% of the crew doesn't even know how to fly the plane, even if we get it together in time. So it's on all of us to make sure that we are constantly pushing ourselves forward in our understanding and use of data, probability, and statistics, and that we are enriching our community and making sure that everybody's leveling up on this, because it's us who are actually going to be fixing these problems. So the principle: program in feedback and experimentation. Optimizing systems need probabilistic models and failure states that incorporate them. Product and systems thinking can fix this, but the whole of our profession needs to level up.
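To see what that unattended loop looks like, here is a minimal simulation with made-up numbers: two neighborhoods with identical underlying behavior, a biased starting arrest count, patrols allocated by last year's arrests, and one assumption that is generous to the system, namely that arrests rise slightly faster than linearly with patrol presence, since being around cops a lot is itself what drives arrests up.

```python
# Two neighborhoods with the SAME underlying offense rate. The only difference
# is the starting data: A was historically over-policed, so it begins with a
# larger share of recorded arrests.
arrest_share = {"A": 0.6, "B": 0.4}

# Assumption for illustration: arrests grow a bit faster than linearly with
# patrol presence (more stops beget more arrests), hence an exponent above 1.
AMPLIFY = 1.2

for year in range(1, 9):
    # Observe: last year's arrest shares.  Orient/Decide: allocate patrols in
    # the same proportions.  Act: patrols generate this year's arrests.
    raw = {n: share ** AMPLIFY for n, share in arrest_share.items()}
    total = sum(raw.values())
    arrest_share = {n: value / total for n, value in raw.items()}
    print(year, {n: round(share, 3) for n, share in arrest_share.items()})

# A's share of arrests creeps up every year, roughly 0.62, 0.64, 0.67, ...,
# even though nothing about A's actual behavior is different from B's.
# No one ever re-orients and asks whether the original 60/40 split meant anything.
```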
Next, I want to talk about transparency, another tool for making sure that our AIs are behaving correctly. You've seen these headlines, I'm sure. I won't get too deep into them just this second. Suffice to say that if you read both of these articles, you get pretty quickly to the point where researchers found, researchers heard, researchers looked into, researchers did a very specific thing. What that is telling you, and what is actually true, is that these systems don't report what's happening. They don't report their reasoning. Some of them actually can't: they come up with a solution, and working backwards to explain why they made that decision would be too difficult technologically.

Defaults are incredibly powerful. If you're familiar with Dan Ariely, he wrote a book called Predictably Irrational, and he talks about the move in some European countries to switch organ donation from opt-in to opt-out: from an opt-in checkbox saying "yes, I want to be an organ donor" to an opt-out box saying "no, I don't want to be an organ donor." He looked at it before and after. You can see the lower bars here are the opt-in programs; the higher bars are the opt-out programs. And you could look at that and just say, well, maybe they're just tricking people: you tricked me into not checking that box when I wouldn't have checked it before, and now you're going to take my organs. But what Ariely gets to is actually way more interesting. He talked to people from different countries, before and after the change from opt-in to opt-out. The responses in the opt-in case for why people didn't sign up are: it's icky, I want an open casket, I don't want my family to know I don't have a kidney, random stuff like that. Then you ask people in the opt-out case why they are organ donors, and you'd expect, "oh, I didn't realize I had to uncheck that thing." That's not what happens. They actually say, "oh, I'm just naturally a good person. I think I'm just more caring than everybody else." One checkbox and the wording next to it changed their minds about who they were. The default of checking versus not checking a box, a little form element, was so strong that it made them decide they were different humans. So defaults are incredibly powerful.

Now think about where AI is today. It went from the point where only the algorithm inventors could possibly understand what was happening, the people who wrote the very original neural network white papers. Then an era that we're just coming out of, I believe, where only PhDs could understand it: you studied in the right program with the right professor for the right period of time, and you came out with a corpus of knowledge that would make you able to use PyTorch and TensorFlow and those kinds of things. We're getting past that pretty quickly, and you can see it today: I think Google and Facebook have both announced they're going to open offices in Waterloo to start getting AI undergrads right out of school into their programs, because the war for AI talent has gotten so severe. But ultimately, looking forward, it's going to get to the point where any engineer can use it, any engineer of any quality. And what they're going to do, based on what we just saw, is use the defaults. So remember: those defaults are not transparent. They don't tell you about the AI's logic.
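As a small illustration of the difference, here is a sketch, with hypothetical data and feature names and scikit-learn assumed purely for convenience, of what "just use the defaults" looks like next to the bare-minimum transparency step of asking the model which inputs are actually driving its decisions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: three harmless-looking features plus one proxy
# column that secretly carries something sensitive (say, a zip-code stand-in).
X = rng.normal(size=(1000, 4))
y = (X[:, 3] > 0).astype(int)          # the label leans entirely on the proxy
feature_names = ["tenure", "purchases", "logins", "zip_code_proxy"]

# The "any engineer can use it" path: accept every default and ship.
model = RandomForestClassifier().fit(X, y)

# The bare-minimum transparency step: ask which features carry the decision.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name:>15}: {importance:.2f}")

# zip_code_proxy dominates the importances -- exactly the kind of thing you
# want to see before launch, not read about from researchers afterwards.
```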
If it's not transparent, we can't provide feedback or do the kind of experimentation we were talking about in the previous section. So let's come back to those headlines. That one ended up being more sensational; that one ended up being less sensational. The Google AI thing is actually super cool. If anybody read that article: it didn't come up with its own language. It came up with basically a bridge language, a cognitive bridge between two different languages, an internal representation of what, say, a horse might be, and that representation wasn't necessarily any human language. It was really cool; we can learn a lot about cognition through that. That's super neat. Facebook's was actually, when you read into it, way more boring. They set up two bots on Messenger and had them negotiate with each other. The bots developed a market language, which might sound terrifying, except humans invented market languages hundreds of years ago. They are at work today in every stock market; you wouldn't understand what the people there are saying either. It's just more efficient, if you're buying a bunch of stuff and saying the same thing to people all the time, to use shorter words. So both of those are cases where the singularity is not going to kill us today. Nothing in those two articles is going to come back and haunt your dreams. Suffice to say, though, that researchers, very talented researchers, needed to go into those tools and figure out what was happening in them. So moving forward, if that same thing happens again, if two computers start speaking a totally different language, the ordinary engineer who's trying to figure out what's going on isn't necessarily going to have the capability of understanding it. That's really frightening. So the next mission: encourage transparency. Engineers will increasingly rely on defaults. Defaults in this world aren't transparent. So the result is myriad opaque, and likely incorrect, decisions that we need to be looking out for.

Next, and the most fun one: responsibility, and by proxy liability, because that's how responsibility ends up working itself out in the business world. You might be sitting here thinking: why do I care about any of this? I'm not a data science person. I'm not an engineer. I'm not going to be building any of these things. Why do I actually care about any of this stuff? So let's go to a pretty simple problem. You're driving your car down the street. Five kids dart out in front of you. You're going really fast. Maybe it's late at night. You didn't see them. And you realize pretty quickly that if you keep going, you're going to kill the kids. So of course your first instinct is to look over and find a different place to put your car. But the only thing on the side of the road is a telephone pole, and if you swerve off the road, you're definitely going to die. So you've got a choice: kill five kids, or die. It's a tough choice. Most of you in the room are probably sitting here in a very quiet, comfortable environment thinking, I'm a good person, I will make the right decision. Maybe you're not thinking that. Keep that to yourself. But this one's pretty easy ethically. Now let's say you were to buy a car, and the car was making the decision of whether it's going to kill the kids or kill you. And let's just say this is the car that kills you, and this is the car that kills the kids. That's a decision you had to make way in advance, a decision that had nothing to do with the decision you're making right then.
And I guarantee you, a high-end carmaker is not going to advertise that they're going to kill you. It's just not going to happen. And the real secret I learned from my buddy John over here is that if you get any AI engineer at one of these autonomous driving companies drunk, they're going to tell you: no one's going to buy a car that kills them. They're just not. So the default won't even be to give you an option. The default is going to be to kill the kids. This is where, typically, government steps in. Government is often slow as a feature. It's going to be particularly slow in this area, I believe, because it will require our whole understanding of how we train and certify cars to be at the same level as the researchers who are working on them. And today, those are still PhDs or really hardcore undergrads.

But let's look at a few more cases. If Facebook lets users segregate themselves into smaller and smaller groups, leading to isolation and conflict, whose fault is it? Is it Facebook's fault? Is it the users' fault? Is it the product manager's fault? If an autonomous drone bombs a hospital, whose fault is it? Today, autonomous drones can't go weapons-hot without human involvement, but you know somebody's out there thinking about it. And if it kills a bunch of people in a hospital, whose fault is it? The person who launched it? The company that sold it? The government? The product manager who decided that was a good feature to add? And if a teenager is repeatedly arrested for typical bone-headed teenager stuff and spends the rest of their life in and out of the legal system, whose fault is it? Who's responsible? Who's liable? I think it can be really easy to diffuse those things away and say, well, there's a lot that happens between when I created the product, or came up with that spec, and when it got executed, to make it seem like it's not really your fault.

But we already have an example, and you are seeing increasing scrutiny over this: obviously Google, Twitter, and everything that happened through the last election cycle. You're starting to see legislative action taking place, and you're starting to see some court cases testing these things. But it's going to be a while. This guy is not running for office. This guy is really scared about liability. That is why he's out discovering the rest of America. And those food scientists basically fell into a bunch of different camps; the book goes all the way through exactly what happened. There's deep remorse, which is my natural inclination. It's how I would feel if I had created the childhood obesity epidemic, or at least done a bunch of work that fed into it. Then there was the "I was paid to do a job" group. They were just executing. Their incentives were largely monetary and they were focused on revenue. They weren't thinking about a moral North Star, and it wasn't their job to do so. And then there were the folks who basically said: we did nothing wrong, all of this is individual choice, you can do whatever you want. And it's interesting to think about the fact that each of these positions, in philosophy and politics, has its own adherents. So there's not even one clear ethical answer, a yes, this is the absolute right thing, and no, this is the absolute wrong thing. It's kind of on all of us to figure out where we fit and really tailor what we do and our morals toward it. But I know just following orders didn't work out for these gentlemen.
Anybody recognize who they are? It's the Nuremberg Trials. They're about to execute a bunch of Nazis. Their defense was that they were just following orders. They were hanged anyway. So, responsibility: ethics are murky and personal. Liability won't stay murky. The courts have a very good way of figuring all this stuff out over time, but it is going to take time before we really understand who is legally responsible, and it's on us to figure out what level of ethical responsibility we're going to take on in the meantime.

Okay, let's wrap up. You might have heard all of this and think that I'm some kind of crazy crank Luddite who just wants to throw his sabot into the machine and stop it all. The truth of the matter is actually very, very far from that. My core belief is actually that there is no good or bad technology. There is only technology and the people who are doing stuff with it, and those, to me, form the unit of responsibility. Even in the case of AI, where the AI is making decisions by itself, it's making decisions based on how we told it to make decisions. It's more like parenting than it is like writing a software program. We can tell it how to behave, we can hope that it behaves that way, we can expect it to behave that way, and it will be more deterministic than you get out of kids, for sure. But it is ultimately going to do what we taught it to do. And if we're not thoughtful, AI can lead to, in the cases that we've been talking about: greater isolation and unequal treatment; biased data becoming a self-fulfilling prophecy, that negative death-spiral OODA loop; leaning into and intensifying tragic disasters, unpredictable stuff that hasn't happened in the past, or of a magnitude we haven't seen before; systems that malfunction without any human input, that go off the rails and make decisions we weren't expecting; and then, finally, with the lack of transparency, the inability to even understand the malfunction, why it even happened.

So you might be thinking from all of this that I'm about to present you with the grand plan. I don't have a grand plan. You're my grand plan. Ultimately, if you've taken nothing else from this, it's that people are the solution to AI going wrong. And people, by which I mean product managers: I like to think of product managers as engineers of the squishy human computer. If we are doing our job right, and if we are thinking about these things ethically, then we are the ones who can actually go out there and drive change. We don't have to follow the revenue North Star. I know it's hard, if you're inside a company, not to do whatever your boss tells you, but believe me, it can be done, and the benefits are huge. So my mission for all of you: don't let interfaces isolate, understand the biases in your data, make allowances for black swans, program in feedback and experimentation, and then make sure the whole thing is transparent so you can understand every part of it as it's operating. Feel free to take a picture; we can pause on that one. But that's my presentation. Thanks so much for coming, and thanks, Product School. Questions? And of course you can come up afterwards to chat if you'd rather.
So pushing back against goals that a company might have, in terms of what we're focusing on, what we're building, why we're building it, the financials and all that stuff, is sometimes pretty daunting. But when your boss tells you to kill the kids, where do you get the conversation started? How do you push back against bullshit?

Repeating the question for the microphone: the question was effectively how do you push back when your boss, or the company, is aligned around some kind of perverse incentive that you believe will not make the thing act ethically? Oddly enough, when I took computer science, we weren't in the engineering school. The engineering school had a whole professionalism class, and a big part of it was basically about whistleblowing, effective whistleblowing and when to do it, and what they actually teach you in engineering school is never to whistleblow. I don't agree with that. I think what you need to be able to do is level up the conversation above the near-term tactical execution of the thing. It's of course very hard to push back against MAU and DAU and things like that, but Slack did something that I thought was really, really interesting and positive. Slack made it so that if you turn on notifications on your phone or on your desktop app, it will stop sending you emails. I know for a fact, because I've run this experiment before, that this decreases engagement. It will actually drive down your daily active and monthly active users. In my case, and I told you I was not the ethical crusader of the past ten years, I went with the one that performed better. But Slack actually made an ethical decision. Part of that, I think, is the leadership they have; they work at an ethical company, one that spends a lot of time on diversity. But at the end of the day, there was somebody there who made that call. So maybe it's about finding like-minded people, people who understand broader societal issues. And maybe, ultimately, it's sometimes just not a company you can work at.

Technology advances so fast with AI and machine learning. You have algorithms which pretty much read all your browsing and everything. Where do you draw the line as a product manager? Should that be something for the government to handle with regulation, or do you think a product manager should be able to draw the line and say, okay, this is personal data, or this is where the public got the information?

Yeah, so the question was basically where do you draw the line on engagement, gathering data, and that kind of stuff. Do you wait until the government shuts you down for doing something, or do you try to self-moderate ahead of that, and how do you know where that line is? That's a good question. I think one of the early principles of Facebook, and I think they've ditched this principle like Google ditched Don't Be Evil, was basically don't do anything you wouldn't want written about, and I think that kept them from doing a lot of the really shady stuff up front. Of course, now they're buying data from data brokers, and they emailed everybody about all the data they were purchasing about you last year, even if you weren't a member of the site, all that fun stuff. But that to me is a good beginning of morality: if I wouldn't brag about this to somebody, I probably shouldn't be doing it.

Yeah, so the next question was basically whether it's reasonable to develop a community on the product team within your organization to tackle ethical issues.
I believe that's a really good route, and I think part of being a good product manager is just convincing people, finding ways to get people to agree with your idea and build consensus. As a manager, one of the most helpful things I've found is identifying representatives of the company within each team: this person, when they speak, speaks for the sales team, so before I go to the sales team I'll check it with that person first, and it's a much easier conversation. I think that's critical in talking about ethical issues too. A product manager ultimately can't do all of this on their own. I focused it on product because it's Product School. The one caveat I'll add is that lawyers are a little behind the eight ball, just as a profession, when it comes to technology, so I don't know that the lawyers are going to be all that helpful, but other groups I think could be more cutting edge.

I'm curious how big a conversation this is in Silicon Valley, or in fact in general?

So the question is how big a deal AI ethics is in Silicon Valley. I will say you find weird pockets. I saw a couple of articles at the beginning of the year basically about institutional racism and how machine learning was making it worse, but I didn't really see anything come out of that, so I don't know how widely these specific issues are being discussed. The one that everybody should be thinking about, of course, is the singularity. There are currently several companies working right now to create the singularity, the event by which a computer basically has the same cognitive abilities as a human and can make all of its own decisions completely. And they're working on the singularity because they think that if we don't create multiple singularities, it is going to kill humanity. So that ethical issue is actively being worked on by several companies right now. It's very sci-fi and a little crazy, but that's more where I think the ethical focus has been today.

I think you talked about it a little bit in your presentation, and probably even concluded with it, but how do you forecast that something you as a product manager are developing is going to, somewhere far down the line, come to an ethical crossroads? For example, with that self-driving car, when you first conceived it you probably just wanted to make sure it wasn't going to kill your passenger. You didn't foresee the rest. I mean, you probably could have, but in a lot of cases it's going to go through a lot of iterations and you don't know where it's going to go.

Right. So the question there was basically how do you know that your product is going to create some kind of ethical conflict later, if you worked on it way, way back. To me that's part of the OODA loop, and making sure that you still have humans in the loop, because they're observing, orienting, deciding, and acting over time through the iterative process of creating that product. You didn't reach the ethical issues on day one, but you will reach them eventually. So programming in the things that will tip you off to them, I think, is more important than trying to predict everything ahead of time.

It's worse. I mean, I go to the office and engagement is like the top metric. Yep. How do I work on building interfaces that don't isolate people when the thing I'm measured on is personalization and engagement? That's a good question.
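To illustrate the "program in things that tip you off" idea from the previous answer, here is a minimal sketch of a human-in-the-loop drift check. The segment names, baseline rates, and 20% tolerance are hypothetical; the only point is that the system pages a person rather than silently adapting.

```python
# Minimal sketch of a "tip-off": compare a live metric, broken out by segment,
# against a baseline and alert a human when it drifts. Segments, baseline
# numbers, and the 20% tolerance are hypothetical.

BASELINE_APPROVAL_RATE = {"segment_a": 0.62, "segment_b": 0.58}
TOLERANCE = 0.20  # relative drift that should wake a human up


def drift_alerts(live_rates, baseline=BASELINE_APPROVAL_RATE, tolerance=TOLERANCE):
    """Return segments whose live rate drifted more than `tolerance` from baseline."""
    alerts = []
    for segment, base in baseline.items():
        live = live_rates.get(segment)
        if live is None:
            alerts.append((segment, "no live data"))
        elif abs(live - base) / base > tolerance:
            alerts.append((segment, f"baseline {base:.2f} -> live {live:.2f}"))
    return alerts


if __name__ == "__main__":
    # Whatever this returns goes to a person; the system never self-corrects silently.
    print(drift_alerts({"segment_a": 0.61, "segment_b": 0.41}))
```

A check like this doesn't predict the ethical crossroads in advance; it just makes sure a human notices when the product starts behaving differently than it used to.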
So the question is basically how do you balance this out if your core metric really is engagement and personalization. To me, and I don't have a great answer, it's kind of the same thing: if you're working on a specific metric, it's about trying to find North Star metrics that sit outside of it. I think Facebook is right now trying to figure out whether people are happy using Facebook, and it's doing a lot of experimentation around that as a different North Star metric than raw engagement. What I will say, though, is that eventually you hit some kind of cliff. There's actually a really good old Facebook blog post that showed local maxima, and then further maxima you could reach if you just changed the paradigm of how you were thinking. To me, as much data as these platforms have gathered about me, they haven't gotten significantly better. My own personal ROI on them has not gone up with the volume of data they're collecting, and I have to assume it's somewhat similar for other people. There was also a great blog post, and I'll try to find it and share it so everybody can see it later, basically about starting with a zero-trust system. And the next step after that is only collecting the data that is absolutely actionable to you right then. Why capture stuff you're not going to use? That, to me, is what I call big data: the data that sits off in the corner and nobody does anything with it until somebody decides to plumb it later. I want data that is actionable, and because of that I want only what is really, really necessary to make this specific personalization feature work, which means we shouldn't be collecting everything, which means we shouldn't be operating on metrics that would further drive people apart.
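As a minimal sketch of that "only collect what is actionable right now" idea: an explicit allow-list per feature, so anything not named for a feature is dropped at the edge. The feature name and field names here are hypothetical, not from the talk.

```python
# Minimal sketch of data minimization: each feature declares the fields it
# actually acts on, and everything else is dropped before storage. The feature
# and field names are hypothetical.

ALLOWED_FIELDS = {
    "article_recommendations": {"user_id", "language", "recently_read_topics"},
}


def minimize(event, feature):
    """Keep only the fields the named feature uses; silently drop the rest."""
    allowed = ALLOWED_FIELDS.get(feature, set())
    return {key: value for key, value in event.items() if key in allowed}


if __name__ == "__main__":
    raw = {
        "user_id": "u123",
        "language": "en",
        "recently_read_topics": ["cycling", "sourdough"],
        "precise_location": (37.77, -122.42),  # not actionable for this feature
        "contacts": ["..."],
    }
    print(minimize(raw, "article_recommendations"))
```

The allow-list forces the "why are we capturing this?" conversation to happen per feature, at spec time, instead of after a pile of unused data has already accumulated.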