Hi, everyone. Thanks for being here with me today. My name's Lauren Maffeo, and as mentioned, I'm a senior content analyst at GetApp, which is a Gartner company. I cover the impact of emerging technologies like AI and blockchain on small and mid-sized business owners.

I want to start this presentation by taking us back to 2015, when there was a tool called HowOld.net where you could upload photos of yourself and have facial recognition software predict your age. I had some fun with this earlier this week and uploaded a photo of myself taken earlier this month. I was flattered to see that HowOld.net thought I was 24 years old, which is younger than I actually am. I was less flattered when I put in a photo taken three months ago, and the same tool predicted minutes later that I'm actually 38 years old, which is a little far north of where I am at the moment. So HowOld.net made news in 2015 as an example of how facial recognition software was making strides but missing the mark in some key areas.

The challenge is that machine learning gaffes aren't always funny. They can have serious unintended consequences for end users as a result of data sets that are too monolithic. One example is a product called COMPAS, a machine learning algorithm that predicts the likelihood that defendants will reoffend. It has, however, been shown to make biased predictions about who is more likely to reoffend. Research from ProPublica found that the tool was two times more likely to incorrectly flag black defendants as being high risk for reoffending. It was also two times more likely to incorrectly predict that white defendants were low risk for reoffending. And this isn't a hypothetical scenario: COMPAS is currently used by judges in over 12 US states, where it helps decide whether people in jail should be let out on bail before they go to trial.
It also has an impact on the length of people's sentences. So the consequences of this algorithm are very real for the people affected by it. However, we can't say that this is intentional. It's not likely that the engineers who built COMPAS deliberately put bias into the system. Rather, it's more likely that COMPAS was trained on a data set that wasn't exposed to a diverse enough range of people, including different skin tones. That's an example of what Nicole Shadowen calls machine bias: programming that assumes the prejudices of its creators or its data. A lot of this is unconscious, which makes it almost more dangerous, because when you're building the algorithms you're not necessarily sure whether they're going to produce correct outcomes. Many algorithms are also black boxes, which means that if you're not directly working on them, you can't necessarily assess whether they're biased or not.

So in the limited time that we have today, I want to use this lightning talk to make a case for why you should add bias testing to your product development cycles. We don't have time to get into the nuances of bias testing, but I do want to make the case that this is the right time to start doing it, from both the product and development perspectives. AI is still in its early days, and it's a unique tool because it continuously learns from the data that's fed into it. This means that, in theory, it can retrain itself to give less biased results, but it also runs the risk of reinforcing the biases that were put into the system in the first place. Few AI products use bias testing today, but if you add it at the start of the product lifecycle and continue testing throughout, it can alleviate some of the issues that we've seen with products built on AI architecture.
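To make the idea of a bias test concrete, here is a minimal sketch of the kind of automated check a team could run in a development cycle. It compares false positive rates across demographic groups, the same disparity ProPublica measured in its COMPAS analysis. All of the data, group labels, and the disparity threshold below are hypothetical and purely illustrative, not drawn from any real system.

```python
# Minimal sketch of a bias test: compare false positive rates across
# groups and flag disparities. All records and thresholds are hypothetical.

def false_positive_rate(records):
    """Share of true negatives (did not reoffend) wrongly labeled high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    false_positives = [r for r in negatives if r["predicted_high_risk"]]
    return len(false_positives) / len(negatives)

def bias_check(records, max_ratio=1.25):
    """Return per-group FPRs and the groups whose FPR exceeds the
    lowest nonzero group rate by more than max_ratio."""
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    rates = {g: false_positive_rate(rs) for g, rs in groups.items()}
    nonzero = [rate for rate in rates.values() if rate > 0]
    baseline = min(nonzero) if nonzero else 0.0
    flagged = {g: rate for g, rate in rates.items()
               if baseline and rate / baseline > max_ratio}
    return rates, flagged

# Hypothetical predictions from a risk model
records = [
    {"group": "A", "reoffended": False, "predicted_high_risk": True},
    {"group": "A", "reoffended": False, "predicted_high_risk": True},
    {"group": "A", "reoffended": False, "predicted_high_risk": False},
    {"group": "A", "reoffended": True,  "predicted_high_risk": True},
    {"group": "B", "reoffended": False, "predicted_high_risk": True},
    {"group": "B", "reoffended": False, "predicted_high_risk": False},
    {"group": "B", "reoffended": False, "predicted_high_risk": False},
    {"group": "B", "reoffended": True,  "predicted_high_risk": True},
]

rates, flagged = bias_check(records)
print(rates)    # per-group false positive rates
print(flagged)  # groups exceeding the disparity threshold
```

Run as a continuous test, a check like this fails the build whenever a retrained model's error rates drift apart across groups, which is one way to catch the reinforcement problem described above before a model ships.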
That's something I really want to hit home, because AI has the potential to transform the way we all live and work on a scale we haven't seen before, but that heightened opportunity brings heightened risk with it. The best way to make sure your algorithms are unbiased, and that you're impacting consumers in a positive way, is to design them with diversity in mind from the outset and to have someone in a data scientist's role who can continuously audit your algorithms, making sure they're exposed to a wide enough range of data to have a positive impact on the people who will use your products. Thank you.