Hi, my name is Kush Varshney and I'm a research staff member with IBM Research AI. I'd like to tell you about our NIPS paper on reducing discrimination in decision-making. My group and I developed a new algorithm that pre-processes training data to make it more fair. As we all know, artificial intelligence and machine learning are being used in every walk of life, and that includes high-stakes decision-making such as loan approvals, prison sentencing, and so on.

The algorithm we've developed takes as input a training set that might, for various reasons, contain discrimination with respect to certain protected attributes such as race or gender. The idea is to pose an optimization problem with one objective and two constraints. The objective is to minimize the difference, in a statistical sense, between the input training data and the clean output training data set that we're creating. The first constraint is a group fairness constraint: what we're trying to achieve there is statistical independence between the protected attributes and the class label, that is, the decision. The second is an individual distortion constraint at the sample level, again measured by some sort of distance. Its purpose is to limit the changes made to individual people, or individual samples, because we don't want small changes to be magnified into huge differences in outcomes.

With all of that, we solve this optimization problem and produce a clean data set that can then be used as training data by any AI algorithm afterwards.
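To make the structure of that optimization concrete, here is a minimal sketch in Python using cvxpy. The toy joint distribution `p_dxy`, the fairness slack `eps`, the distortion budget `c_max`, and the Hamming-style distortion metric are all illustrative assumptions, not the paper's exact formulation; the sketch only mirrors the three pieces described above: a statistical-closeness objective, a group fairness constraint, and a per-sample distortion constraint.

```python
# Minimal sketch of the pre-processing optimization, under assumed toy data.
# The decision variable is a randomized mapping p(x_hat, y_hat | d, x, y).
import itertools
import numpy as np
import cvxpy as cp

# Discrete toy domain: protected attribute d, feature x, label y, all binary.
d_vals, x_vals, y_vals = [0, 1], [0, 1], [0, 1]
dxy = list(itertools.product(d_vals, x_vals, y_vals))  # rows: original (d, x, y)
xy = list(itertools.product(x_vals, y_vals))           # cols: transformed (x_hat, y_hat)

# Assumed empirical joint p(d, x, y), with dependence between d and y.
p_dxy = np.array([0.10, 0.05, 0.15, 0.10, 0.05, 0.15, 0.10, 0.30])
p_dxy /= p_dxy.sum()

# Each row of P is a conditional distribution over transformed (x_hat, y_hat).
P = cp.Variable((len(dxy), len(xy)), nonneg=True)
constraints = [cp.sum(P, axis=1) == 1]

# Transformed joint over (x_hat, y_hat): q = sum_{d,x,y} p(d,x,y) * P[row, :].
q_xy = P.T @ p_dxy

# Original joint over (x, y), marginalizing out d.
orig_xy = np.array([sum(p_dxy[i] for i, (d, x, y) in enumerate(dxy)
                        if (x, y) == col) for col in xy])

# Group fairness: p(y_hat = 1 | d) must be within eps of a common target rate
# (here the overall original positive rate) for every protected group d.
eps = 0.02
pos_cols = [j for j, (x, y) in enumerate(xy) if y == 1]
target = sum(orig_xy[j] for j in pos_cols)
for d in d_vals:
    rows = [i for i, (dd, x, y) in enumerate(dxy) if dd == d]
    p_d = sum(p_dxy[i] for i in rows)
    pos_rate_d = sum(p_dxy[i] / p_d * P[i, j] for i in rows for j in pos_cols)
    constraints += [cp.abs(pos_rate_d - target) <= eps]

# Individual distortion: expected per-sample change (Hamming distance on (x, y))
# is capped for every original (d, x, y) state.
dist = np.array([[(x != xh) + (y != yh) for (xh, yh) in xy]
                 for (d, x, y) in dxy], dtype=float)
c_max = 0.5
constraints += [cp.sum(cp.multiply(P, dist), axis=1) <= c_max]

# Objective: keep the transformed joint statistically close to the original
# (total variation distance, a convex function of P).
objective = cp.Minimize(0.5 * cp.norm1(q_xy - orig_xy))
prob = cp.Problem(objective, constraints)
prob.solve()
print("Optimal TV distance:", prob.value)
```

Because every constraint here is linear in P and the objective is a norm of an affine expression, the whole sketch is a convex program, which is what makes it possible to solve this kind of pre-processing problem exactly rather than heuristically.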