Hi, everyone. Thanks for being here and spending your lunch with us today. As Jason mentioned, my name's Lauren Maffeo. I'm a senior content analyst at GetApp, a Gartner company, where I cover trends in cloud project management software and emerging technology for small and mid-sized businesses. As Jason also mentioned, I just wrote a piece for opensource.com on the subject I'm going to talk about with you today, which is how to erase unconscious bias from the data sets used to train AI-powered products.

To do that, I want to start by taking us back to 2015. There was a tool called How-Old.net. It was a website where you could upload photos of yourself, and the software would spit out a guess at how old it thought you were. So I had some fun with this in August and put in a photo of myself that had been taken that month. I was flattered when the site thought I was 24 years old, which is younger than I actually am. I was less flattered when I put in a photo of myself taken in May and it thought I was 38 years old. This was used as an example in the press of how facial recognition software means well, but it still has a long way to go before it reaches 100% accuracy.

The problem is that machine learning gaffes aren't always funny like this. They can have pretty serious consequences for end users when the data sets used to train these machine learning algorithms aren't diverse enough in the data they take in.

One now infamous example of how machine bias can hurt users is a product called COMPAS. It's a machine learning algorithm that predicts how likely defendants are to reoffend. Research from ProPublica, a nonprofit journalism outlet, found that it has made biased predictions about recidivism based on race. Their research found that COMPAS has been twice as likely to incorrectly flag black defendants as high risk for reoffending. It has also been twice as likely to incorrectly flag white defendants as low risk for reoffending. And COMPAS isn't a hypothetical product. It's used by judges in over 12 US states, and its flawed results have impacted everything from whether defendants were released on bail or held before trial to the lengths of their sentences in many cases. It has really become an infamous case study in how machine learning can use inference to draw biased connections between data points in a system.

But that brings up an important question: was this done on purpose? It probably wasn't. In fact, race wasn't one of the variables the algorithm accounted for. It's more likely that this was an unconscious mistake. The data used to train COMPAS probably wasn't representative enough of the full range of defendants it would go on to score. It's also possible that the algorithm drew from historical data about arrest rates for people who are black versus people who are white and incorrectly correlated race with rates of recidivism. Part of the problem is that we don't actually know for sure, because the creators of COMPAS refuse to disclose how the algorithm works and how it comes to its decisions, citing proprietary reasons. That means it's a black box algorithm, and this is one of the big issues we currently have with transparency in AI: oftentimes, even the creators of these algorithms can't explain how they work.
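To make that disparity concrete, here is a minimal sketch in Python of the kind of group-wise error-rate audit ProPublica's analysis describes: comparing how often a "high risk" prediction is wrong for people in each group. The column names, group labels, and toy records are illustrative assumptions, not COMPAS data or ProPublica's actual methodology.

```python
# A minimal sketch (illustrative data only) of auditing a binary "high risk"
# prediction for error-rate gaps across demographic groups.
import pandas as pd

# Toy data: each row is a defendant with the model's prediction
# (1 = flagged high risk) and the observed outcome (1 = actually reoffended).
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   0,   0,   1,   0],
})

for group, sub in df.groupby("group"):
    # False positive rate: flagged high risk among those who did NOT reoffend.
    negatives = sub[sub["reoffended"] == 0]
    fpr = (negatives["predicted"] == 1).mean()
    # False negative rate: flagged low risk among those who DID reoffend.
    positives = sub[sub["reoffended"] == 1]
    fnr = (positives["predicted"] == 0).mean()
    print(f"group {group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Run against a real evaluation set, a persistent gap between those group-wise false positive rates is exactly the kind of signal ProPublica reported.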
But we can infer, based on the flawed results of products like COMPAS, that machines have their own biases, just like people. And that's really what machine bias is: programming that takes on the prejudices of its creators or its data, whether those prejudices are conscious or not. So I really wanted this talk to be a call to action for why you should add bias testing to your product development life cycle if you're working on machine learning algorithms that power AI products.

As How-Old.net shows, AI is still in its earliest days, with a lot of room for improvement. And it's a unique technology because it's constantly receiving and learning from new data. The risk you run there is that it could either reinforce early biases within the system, or it could learn new biases in production and then refine its results based on that data. We have relatively few products powered by this type of technology today, and even fewer product teams that use bias testing. But if you appoint someone like a data scientist or a product manager on your product dev team who can own the health of your data sets, you can go a long way towards solving this problem and ultimately building AI-powered products that are more equitable and benefit all users, not just a select few. Thank you.
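As a postscript to that call to action, here is a rough sketch of what bias testing inside a product development life cycle could look like: a pytest-style check that fails the build when false positive rates drift too far apart between groups. The 0.1 threshold, the toy evaluation data, and the helper function are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of a bias test that could run in a CI pipeline
# (pytest-style: any function named test_* with plain asserts).
# The toy evaluation data and the 0.1 threshold are illustrative assumptions.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome == 0) but were flagged high risk."""
    flagged = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(flagged) / len(flagged) if flagged else 0.0

def test_false_positive_rate_gap_stays_small():
    # In a real pipeline these would come from a held-out evaluation set,
    # split by the demographic attribute you're auditing.
    group_a = {"predictions": [1, 0, 0, 1], "outcomes": [0, 0, 1, 1]}
    group_b = {"predictions": [0, 1, 0, 1], "outcomes": [0, 0, 1, 1]}

    gap = abs(false_positive_rate(**group_a) - false_positive_rate(**group_b))
    assert gap <= 0.1, f"False positive rate gap {gap:.2f} exceeds threshold"
```

The point isn't the specific numbers; it's that whoever owns the health of your data sets has an automated tripwire that runs every time the model or its training data changes.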